# Aimlapi

---
# Source: https://docs.aimlapi.com/api-references/3d-generating-models.md

# 3D-Generating Models

## Overview

3D-generating models are AI-powered tools designed to create three-dimensional objects, environments, and textures based on input data such as text prompts, reference images, or existing 3D models. These models utilize advanced techniques like neural rendering, implicit representations, and generative adversarial networks (GANs) to produce high-quality, realistic 3D assets. They are widely used in gaming, virtual reality (VR), augmented reality (AR), and industrial design.

The supported 3D-generating models are listed at the end of this page, along with [their IDs and API reference links](#all-available-3d-generating-models).

## **Key Features**

* **Text-to-3D Generation** – Create 3D models directly from descriptive text prompts.
* **Image-to-3D Conversion** – Generate 3D objects from 2D images using deep learning techniques.
* **Mesh and Texture Generation** – Produce detailed 3D meshes with realistic textures.
* **Scene Composition** – Generate entire 3D environments with lighting and object placement.
* **High-Fidelity Rendering** – Utilize neural rendering for enhanced visual quality.
* **Scalability & Efficiency** – Optimize generation speed and memory usage for large-scale applications.

## Example

{% code overflow="wrap" %}
```python
import requests

def main():
    response = requests.post(
        "https://api.aimlapi.com/v1/images/generations",
        headers={
            # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
            "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
            "Content-Type": "application/json",
        },
        json={
            "model": "triposr",
            "image_url": "https://upload.wikimedia.org/wikipedia/commons/thumb/2/22/Fly_Agaric_mushroom_05.jpg/576px-Fly_Agaric_mushroom_05.jpg",
        },
    )
    response.raise_for_status()
    data = response.json()

    # Download the generated mesh file from the returned URL
    url = data["model_mesh"]["url"]
    file_name = data["model_mesh"]["file_name"]
    mesh_response = requests.get(url, stream=True)
    with open(file_name, "wb") as file:
        for chunk in mesh_response.iter_content(chunk_size=8192):
            file.write(chunk)

if __name__ == "__main__":
    main()
```
{% endcode %}

**Response**: For clarity, we took several screenshots of our mushroom from different angles in an online GLB viewer. As you can see, the model understands the overall shape, but preservation of the pattern on the back side (which was not visible in the reference image) could be improved:
Compare them with the [reference image](https://upload.wikimedia.org/wikipedia/commons/thumb/2/22/Fly_Agaric_mushroom_05.jpg/576px-Fly_Agaric_mushroom_05.jpg):
{% hint style="info" %} Try to choose reference images where the target object is not obstructed by other objects and does not blend into the background. Depending on the complexity of the object, you may need to experiment with the resolution of the reference image to achieve a satisfactory mesh. {% endhint %} ## All Available 3D-Generating Models
| Model ID + API Reference link | Developer | Context | Model Card |
| ----------------------------- | --------- | ------- | ---------- |
| triposr | Tripo AI | | Stable TripoSR 3D |
| tencent/hunyuan-part | Tencent | | Hunyuan Part |
--- # Source: https://docs.aimlapi.com/api-references/service-endpoints/account-balance.md # Account Balance ## Get account balance info You can query your account balance and other billing details through this API.\ To make a request, you only need your AIMLAPI key obtained from your [account dashboard](https://aimlapi.com/app/keys).
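For a quick check, here is a minimal sketch using Python's `requests` (it assumes a standard Bearer-token GET request to the endpoint documented below; `<YOUR_AIMLAPI_KEY>` is a placeholder):

{% code overflow="wrap" %}
```python
import requests

# Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
response = requests.get(
    "https://api.aimlapi.com/v1/billing/balance",
    headers={"Authorization": "Bearer <YOUR_AIMLAPI_KEY>"},
)
response.raise_for_status()
data = response.json()

# Field names follow the response schema below
print(f"Balance: {data['balance']} (status: {data['status']}, low balance: {data['lowBalance']})")
```
{% endcode %}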
## GET /v1/billing/balance > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/billing/balance":{"get":{"operationId":"_v1_billing_balance","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"–","title":"–"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"balance":{"type":"number","description":"The total credits associated with the provided API key."},"lowBalance":{"type":"boolean","description":"True if the balance is below the threshold."},"lowBalanceThreshold":{"type":"number","description":"Threshold for switching to low balance status."},"lastUpdated":{"type":"string","format":"date-time","description":"The date of the request — i.e., the current date."},"autoDebitStatus":{"type":"string","description":"Indicates whether auto top-up is enabled for the plan."},"status":{"type":"string","description":"The status of the plan associated with the provided API key."},"statusExplanation":{"type":"string","description":"A more detailed explanation of the plan status."}},"required":["balance","lowBalance","lowBalanceThreshold","lastUpdated","autoDebitStatus","status","statusExplanation"]}}}}}}}}} ``` --- # Source: https://docs.aimlapi.com/api-references/video-models/runway/act_two.md # act\_two {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `runway/act_two` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} This video-to-video model lets you animate characters using reference performance videos. Simply provide a video of someone acting out a scene along with a character reference (image or video), and Act-Two will transfer the performance to your character — including natural motion, speech, and facial expressions. ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
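For a quick illustration of this two-step flow, here is a minimal curl sketch against the universal endpoint described in the next section (`<YOUR_AIMLAPI_KEY>`, the media URLs, and `<GENERATION_ID>` are placeholders):

```bash
# Step 1: create a video generation task (the response contains an "id")
curl -s -X POST "https://api.aimlapi.com/v2/video/generations" \
  -H "Authorization: Bearer <YOUR_AIMLAPI_KEY>" \
  -H "Content-Type: application/json" \
  -d '{
        "model": "runway/act_two",
        "character": {"type": "image", "url": "https://example.com/character.jpg"},
        "reference": {"type": "video", "url": "https://example.com/performance.mp4"}
      }'

# Step 2: poll for the result using the generation id returned by step 1
curl -s "https://api.aimlapi.com/v2/video/generations?generation_id=<GENERATION_ID>" \
  -H "Authorization: Bearer <YOUR_AIMLAPI_KEY>"
```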
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Video Generation You can generate a video using this API. In the basic setup, you only need an image or video URL for the character (`character`), and a video URL for body movements and/or facial expressions (`reference`). ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["runway/act_two"]},"character":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["video"]},"url":{"type":"string","format":"uri"}},"required":["type","url"],"description":"A video of your character. In the output, the character will use the reference video performance in its original animated environment and some of the character's own movements."},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string","format":"uri"}},"required":["type","url"],"description":"An image of your character. In the output, the character will use the reference video performance in its original static environment."}],"description":"The character to control. You can either provide a video or an image. A visually recognizable face must be visible and stay within the frame."},"reference":{"type":"object","properties":{"type":{"type":"string","enum":["video"]},"url":{"type":"string","format":"uri"}},"required":["type","url"],"description":"Passing a video reference allows the model to emulate the style or content of the reference in the output."},"frame_size":{"type":"string","enum":["1280:720","720:1280","1104:832","832:1104","960:960","1584:672","848:480","640:480"],"default":"1280:720","description":"The width and height of the video."},"body_control":{"type":"boolean","description":"A boolean indicating whether to enable body control. When enabled, non-facial movements and gestures will be applied to the character in addition to facial expressions."},"expression_intensity":{"type":"integer","minimum":1,"maximum":5,"default":3,"description":"An integer between 1 and 5 (inclusive). A larger value increases the intensity of the character's expression."},"seed":{"type":"integer","minimum":0,"maximum":4294967295,"description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. 
If unspecified, a random number is chosen."}},"required":["model","character","reference"],"title":"runway/act_two"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. ## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server
How it works

As the character reference, we will use a scan of a famous Leonardo da Vinci painting. For the motion reference, we will use a video of a cheerful woman dancing, generated with the [kling-video/v1.6/pro/text-to-video](https://docs.aimlapi.com/api-references/video-models/kling-ai/v1.6-pro-text-to-video) model.

| Character reference image | Motion reference video |
| ------------------------- | ---------------------- |
| | |

The program below combines both the POST and GET methods described above: first it sends a video generation request to the server, then it checks for the result every 10 seconds.

{% hint style="warning" %} Don’t forget to set the `api_key` variable to your actual AI/ML API key from your [API Key management page](https://aimlapi.com/app/keys/) before running the code! {% endhint %}
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # replace with your actual AI/ML API key api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/generate/video/runway/generation" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "runway/act_two", "character": { "type":"image", "url":"https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg" }, "reference": { "type":"video", "url": "https://zovi0.github.io/public_misc/kling-video-v1.6-pro-text-to-video-dancing-woman-output.mp4" }, "frame_size":"1280:720", "body_control":True, "expression_intensity":3 } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/generate/video/runway/generation" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 1800 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': 'dbf7a50e-87b2-4ba5-921f-f02fdb8f7cc6', 'status': 'queued'} Generation ID: dbf7a50e-87b2-4ba5-921f-f02fdb8f7cc6 Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': 'dbf7a50e-87b2-4ba5-921f-f02fdb8f7cc6', 'status': 'completed', 'video': ['https://cdn.aimlapi.com/wolf/d462f7e3-bdc6-43ac-8c2a-ac2d61dea014.mp4?_jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJrZXlIYXNoIjoiNzZmNzY0NDRiZTViYWI2YyIsImJ1Y2tldCI6InJ1bndheS10YXNrLWFydGlmYWN0cyIsInN0YWdlIjoicHJvZCIsImV4cCI6MTc1NDc4NDAwMH0._q7rh2fmm7a16k7UHAnDh3aUOIy-fT8NJO3hP-KT4_s']} ``` {% endcode %}
**Processing time**: \~45 sec. **Original**: [784×1168](https://drive.google.com/file/d/1QzqNY6tZdyDh1P5mn3_7QsAPOeoUqtYA/view?usp=sharing) **Low-res GIF preview**:

Low-resolution GIF preview

--- # Source: https://docs.aimlapi.com/integrations/agno.md

# Agno

## About

[Agno](https://app.agno.com/) is a lightweight library for building **Agents** (AI programs that operate autonomously). The core of an Agent is a model, tools, and instructions. Agents also have memory, knowledge, storage, and the ability to reason. Developers use Agno to build Reasoning Agents, Multimodal Agents, Teams of Agents, and Agentic Workflows. Agno also provides a beautiful UI to chat with your Agents, pre-built FastAPI routes to serve your Agents, and tools to monitor and evaluate their performance.

{% hint style="success" %} No data is sent to [agno.com](https://app.agno.com); all agent data is stored locally in your SQLite database!\ The playground app is available to [run locally](https://docs.agno.com/introduction/playground) if you prefer working offline! {% endhint %}

## Installation

```sh
pip install -U agno
```

## How to Use AIML API with Agno

An Agno user can connect an Agent to the AI/ML API through the `AIMLApi` model class:

{% code overflow="wrap" %}
```python
from agno.agent import Agent
from agno.models.aimlapi import AIMLApi

agent = Agent(
    model=AIMLApi(
        id="gpt-4o",
        api_key="<YOUR_AIMLAPI_KEY>"  # Insert your AIML API key
    ),
    markdown=True,
    telemetry=False,
    monitoring=False
)

agent.print_response("Tell me, why is the sky blue in 2 sentences")
```
{% endcode %}
Response ``` ┌─ Message ───────────────────────────────────────────────────────────────────┐ │ │ │ Tell me, why is the sky blue in 2 sentences │ │ │ └─────────────────────────────────────────────────────────────────────────────┘ ┌─ Response (2.5s) ───────────────────────────────────────────────────────────┐ │ │ │ The sky appears blue due to a phenomenon called Rayleigh scattering. This │ │ scattering effect preferentially disperses shorter wavelengths of light, │ │ such as blue and violet, more than longer wavelengths like red and orange. │ │ │ └─────────────────────────────────────────────────────────────────────────────┘ ```
## **Our Supported models**

* All OpenAI-compatible models ([gpt-4o](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o), [gpt-4o-mini](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o-mini), [gpt-4-turbo](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4-turbo), [gpt-3.5-turbo](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-3.5-turbo), [o3-mini](https://docs.aimlapi.com/api-references/text-models-llm/openai/o3-mini), [o1](https://docs.aimlapi.com/api-references/text-models-llm/openai/o1), etc.),
* [Google models](https://docs.aimlapi.com/api-references/text-models-llm/google),
* [Anthropic models](https://docs.aimlapi.com/api-references/text-models-llm/anthropic) (only partially supported, and only via the `api.aimlapi.com/v2` base URL),
* and some other models (the list is constantly being updated).

## **Supported features**

* Synchronous and asynchronous requests (see the async sketch in the Code Examples section below)
* Chain-of-thought reasoning
* Built-in RAG and multimodal support
* Collaborative agent workflows (Teams)
* Access to built-in tools (DuckDuckGo, Docker, and many more)

## Code Examples
Prerequisites 1\. Create and activate a virtual environment ```bash python3 -m venv ~/.venvs/aienv source ~/.venvs/aienv/bin/activate ``` 2\. Export your [AIMLAPI\_API\_KEY](https://aimlapi.com/app/keys) ```bash export AIMLAPI_API_KEY=*** ``` 3\. Install libraries ```bash pip install -U openai duckduckgo-search duckdb yfinance agno ```
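### Async request

Agno agents can also be driven asynchronously. A minimal sketch, assuming Agno's `aprint_response` (the async counterpart of `print_response`) and the same `AIMLApi` model class used above:

{% code overflow="wrap" %}
```python
import asyncio

from agno.agent import Agent
from agno.models.aimlapi import AIMLApi

agent = Agent(model=AIMLApi(id="gpt-4o-mini"), markdown=True)

# aprint_response is assumed to be the async counterpart of print_response
asyncio.run(agent.aprint_response("Share a 2 sentence horror story"))
```
{% endcode %}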
### Stream mode

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
from agno.agent import Agent, RunResponse  # noqa
from agno.models.aimlapi import AIMLApi

agent = Agent(model=AIMLApi(id="gpt-4o-mini"), markdown=True)

# Get the response in a variable
# run_response: Iterator[RunResponse] = agent.run("Share a 2 sentence horror story", stream=True)
# for chunk in run_response:
#     print(chunk.content)

# Print the response in the terminal
agent.print_response("Share a 2 sentence horror story", stream=True)
```
{% endcode %}
{% endtab %}
{% endtabs %}

### Image agent

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
from agno.agent import Agent
from agno.media import Image
from agno.models.aimlapi import AIMLApi

agent = Agent(
    model=AIMLApi(id="meta-llama/Llama-3.2-11B-Vision-Instruct-Turbo"),
    markdown=True,
)

agent.print_response(
    "Tell me about this image",
    images=[
        Image(
            url="https://upload.wikimedia.org/wikipedia/commons/0/0c/GoldenGateBridge-001.jpg"
        )
    ],
    stream=True,
)
```
{% endcode %}
{% endtab %}
{% endtabs %}

### Tool use

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
"""Run `pip install duckduckgo-search` to install dependencies."""

from agno.agent import Agent
from agno.models.aimlapi import AIMLApi
from agno.tools.duckduckgo import DuckDuckGoTools

agent = Agent(
    model=AIMLApi(id="gpt-4o-mini"),
    tools=[DuckDuckGoTools()],
    show_tool_calls=True,
    markdown=True,
    debug_mode=True,
)

agent.print_response("What's happening in France?")
```
{% endcode %}
{% endtab %}
{% endtabs %}

## More

For further information about the framework, please check [the official Agno documentation](https://docs.agno.com/introduction). For additional examples, check out our [repo](https://github.com/D1m7asis/agno-aimlapi/tree/63522cb6c302f88d7a40ab734ee037ca8dc73d23/cookbook/models/aimlapi).

--- # Source: https://docs.aimlapi.com/solutions/bagoodex/ai-search-engine.md

# AI Search Engine

## Overview

The AI Web Search Engine is designed to retrieve real-time information from the internet. This solution processes user queries and returns relevant data from various online sources, making it useful for tasks that require up-to-date knowledge beyond static datasets.

It supports **two** usage options:

{% stepper %}
{% step %}
**Using six specialized API endpoints**, each designed to search for only one specific type of information. These endpoints return structured responses, making them more suitable for integration into specialized services (e.g., a weather widget). Here are the types of information you can retrieve this way:

* [Links](https://docs.aimlapi.com/solutions/bagoodex/ai-search-engine/find-links)
* [Images](https://docs.aimlapi.com/solutions/bagoodex/ai-search-engine/find-images)
* [Videos](https://docs.aimlapi.com/solutions/bagoodex/ai-search-engine/find-videos)
* [Weather details for a specified location](https://docs.aimlapi.com/solutions/bagoodex/ai-search-engine/find-the-weather)
* [Locations](https://docs.aimlapi.com/solutions/bagoodex/ai-search-engine/find-a-local-map)
* [Knowledge about a topic, structured as a small knowledge base](https://docs.aimlapi.com/solutions/bagoodex/ai-search-engine/get-a-knowledge-structure)

See API references and examples on the subpages.
{% endstep %} {% step %} **As a general** [**chat completion**](https://docs.aimlapi.com/capabilities/completion-or-chat-models) **solution** (but searching on the internet): enter a query in the prompt and receive an internet-sourced answer, similar to asking a question on a search engine through a browser. See the API Schema below or check how this call is made in the Python example on the bottom of this page. {% endstep %} {% endstepper %} ## How to make a call Check how this call is made in the [examples](#example-1) below. {% hint style="success" %} Note that queries can include advanced search syntax: * **Search for an exact match:** Enter a word or phrase using `\"` before and after it.\ For example, `\"tallest building\"`. * **Search for a specific site:** Enter `site:` in front of a site or domain.\ For example, `site:youtube.com cat videos`. * **Exclude words from your search:** Enter `-` in front of a word that you want to leave out.\ For example, `jaguar speed -car`. {% endhint %} {% hint style="success" %} You can also personalize the AI Search Engine output by passing the `ip` parameter.\ See [Example #2](#example-2-using-the-ip-parameter-for-personalized-model-output) below. {% endhint %} ### API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["bagoodex/bagoodex-search-v1"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. 
Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. 
Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"echo":{"type":"boolean","description":"If True, the response will contain the prompt. Can be used with logprobs to return prompt logprobs."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. 
You usually only need to use temperature."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"user":{"type":"string"},"best_of":{"type":"integer","nullable":true,"minimum":1},"use_beam_search":{"type":"boolean","nullable":true},"length_penalty":{"type":"number","nullable":true},"early_stopping":{"type":"boolean","nullable":true},"ignore_eos":{"type":"boolean","nullable":true},"min_tokens":{"type":"integer","nullable":true},"stop_token_ids":{"type":"array","nullable":true,"items":{"type":"integer"}},"skip_special_tokens":{"type":"boolean","nullable":true},"spaces_between_special_tokens":{"nullable":true},"add_generation_prompt":{"type":"boolean","nullable":true,"description":"If True, the generation prompt will be added to the chat template. This is a parameter used by chat template in tokenizer config of the model."},"add_special_tokens":{"type":"boolean","nullable":true,"description":"If True, special tokens (e.g. BOS) will be added to the prompt on top of what is added by the chat template. For most models, the chat template takes care of adding the special tokens so this should be set to False (as is the default)."},"documents":{"type":"array","nullable":true,"items":{"type":"object","additionalProperties":{"type":"string"}},"description":"'A list of dicts representing documents that will be accessible to the model if it is performing RAG (retrieval-augmented generation). If the template does not support RAG, this argument will have no effect. We recommend that each document should be a dict containing \"title\" and \"text\" keys."},"chat_template":{"type":"string","nullable":true,"description":"A Jinja template to use for this conversion. If this is not passed, the model's default chat template will be used instead."},"chat_template_kwargs":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Additional kwargs to pass to the template renderer. Will be accessible by the chat template"},"include_stop_str_in_output":{"type":"boolean","nullable":true,"description":"Whether to include the stop string in the output. This is only applied when the stop or stop_token_ids is set"},"guided_json":{"anyOf":[{"type":"string"},{"type":"object","additionalProperties":{"nullable":true}},{"nullable":true}],"description":"If specified, the output will follow the JSON schema."},"guided_regex":{"type":"string","nullable":true,"description":"If specified, the output will follow the regex pattern."},"guided_choice":{"type":"array","nullable":true,"items":{"type":"string"},"description":"If specified, the output will be exactly one of the choices."},"guided_grammar":{"type":"string","nullable":true,"description":"If specified, the output will follow the context free grammar."},"guided_decoding_backend":{"type":"string","nullable":true,"enum":["outlines","lm-format-enforcer"],"description":"If specified, will override the default guided decoding backend of the server for this specific request. 
If set, must be either 'outlines' / 'lm-format-enforcer'"},"guided_whitespace_pattern":{"type":"string","nullable":true,"description":"If specified, will override the default whitespace pattern for guided json decoding."},"ip":{"type":"string","format":"ip","description":"IP from which a request is executed"}},"required":["model","messages"],"title":"bagoodex/bagoodex-search-v1"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Example #1 {% tabs %} {% tab title="Python" %} ```python import requests from openai import OpenAI # Insert your AIML API Key instead of : API_KEY = '' API_URL = 'https://api.aimlapi.com' def complete_chat(): client = OpenAI( base_url=API_URL, api_key=API_KEY, ) response = client.chat.completions.create( model="bagoodex/bagoodex-search-v1", messages=[ { "role": "user", # Enter your query here "content": 'how to make a slingshot', }, ], ) print(response.choices[0].message.content) # Run the function complete_chat() ``` {% endtab %} {% tab title="JavaScript" %} ```javascript // Insert your AIML API Key instead of : const API_KEY = ''; const API_URL = 'https://api.aimlapi.com/v1/chat/completions'; async function completeChat() { const requestBody = { model: "bagoodex/bagoodex-search-v1", messages: [ { role: "user", content: "how to make a slingshot" } ] }; try { const response = await fetch(API_URL, { method: "POST", headers: { "Content-Type": "application/json", "Authorization": `Bearer ${API_KEY}` }, body: JSON.stringify(requestBody) }); const data = await response.json(); console.log(data.choices[0].message.content); } catch (error) { console.error("Error fetching completion:", error); } } // Run the function completeChat(); ``` {% endtab %} 
{% endtabs %}
Response {% code overflow="wrap" %} ``` To make a slingshot, you can follow the instructions provided in the two sources: **Option 1: Make a Giant Slingshot** * Start by cutting two 2x4's to a length of 40 inches each, which will be the main arms of the slingshot. * Attach the arms to a base made of plywood using screws, and then add side braces to support the arms. * Install an exercise band as the launching mechanism, making sure to tighten it to achieve the desired distance. * Add a cross brace to keep the arms rigid and prevent them from spreading or caving in. **Option 2: Make a Stick Slingshot** * Find a sturdy, Y-shaped stick and break it down to the desired shape. * Cut notches on the ends of the stick to hold the rubber bands in place. * Create a pouch by folding a piece of fabric in half and then half again, and then cutting small holes for the rubber bands. * Thread the rubber bands through the holes and tie them securely to the stick using thread. * Decorate the slingshot with coloured yarn or twine if desired. You can choose to make either a giant slingshot or a stick slingshot, depending on your preference and the materials available. ``` {% endcode %}
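The advanced search syntax from the hint above can be used in the same kind of call. A minimal sketch, reusing the client setup from Example #1 with an illustrative query that combines the exclusion and `site:` operators:

{% code overflow="wrap" %}
```python
from openai import OpenAI

# Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
API_KEY = '<YOUR_AIMLAPI_KEY>'
API_URL = 'https://api.aimlapi.com'

client = OpenAI(base_url=API_URL, api_key=API_KEY)

# Exclude a word with "-" and restrict results to a single site with "site:"
response = client.chat.completions.create(
    model="bagoodex/bagoodex-search-v1",
    messages=[
        {"role": "user", "content": "jaguar speed -car site:wikipedia.org"},
    ],
)

print(response.choices[0].message.content)
```
{% endcode %}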
## Example #2: Using the IP Parameter for Personalized Model Output

When using regular search engines in browsers, we can simply ask, '*Weather today*' without specifying our location. In this case, the search engine automatically uses your IP address to determine your location and provide a more relevant response.

The AI Search Engine also supports IP-based personalization. In the example below, the query does not specify a city, but since the request includes an IP address registered in Stockholm, the system automatically adjusts, and the response will contain today's weather forecast for that city.

{% hint style="warning" %} Note that when making a request via Python, the `ip` parameter should be included inside the `extra_body` parameter (see example below). When using other languages, this is not required, and the `ip` parameter can be passed like any other parameter. {% endhint %}

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
from openai import OpenAI

# Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
API_KEY = '<YOUR_AIMLAPI_KEY>'
API_URL = 'https://api.aimlapi.com'

# Call the standard chat completion endpoint
def complete_chat():
    client = OpenAI(
        base_url=API_URL,
        api_key=API_KEY,
    )

    response = client.chat.completions.create(
        model="bagoodex/bagoodex-search-v1",
        messages=[
            {
                "role": "user",
                "content": "Weather today",
            },
        ],
        # insert your IP into this section
        extra_body={
            'ip': '192.44.242.19'  # we used a random IP address from Stockholm
        }
    )

    print(response.choices[0].message.content)
    return response

# Run the function
complete_chat()
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
import fetch from 'node-fetch';

// Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
const API_KEY = '<YOUR_AIMLAPI_KEY>';
const API_URL = 'https://api.aimlapi.com/v1/chat/completions';

async function completeChat() {
  const requestBody = {
    model: "bagoodex/bagoodex-search-v1",
    messages: [
      { role: "user", content: "Weather today" }
    ],
    extra_body: {
      ip: "192.44.242.19" // We used a random IP address from Stockholm
    }
  };

  try {
    const response = await fetch(API_URL, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${API_KEY}`
      },
      body: JSON.stringify(requestBody)
    });

    const data = await response.json();
    console.log(data.choices[0].message.content);
    return data;
  } catch (error) {
    console.error('Error:', error);
  }
}

// Run the function
completeChat();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response When Using IP Parameter {% code overflow="wrap" %} ``` "According to the forecast, today's weather in Stockholm is partly cloudy with light winds. The temperature is expected to be around 6°C (43°F) with a gentle breeze. \n\nThe forecast also mentions that the weather will be sunny intervals and light winds throughout the day." ``` {% endcode %}
{% hint style="warning" %} Keep in mind that the system caches the IP address for a period of two weeks. This means that after specifying an IP address once, any queries **without an explicit location** will continue to return responses linked to Stockholm for the next two weeks, even if you don't include the IP address in subsequent requests. If you need to change the location, simply provide a new IP address in your next request. {% endhint %} If an IP address registered in one location is used while explicitly specifying a different location in the query, AI Search Engine will prioritize the location from the query:
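For example, such a request could look like the sketch below. It reuses the Python pattern from the previous example; the exact query wording ("Weather today in San Francisco") is only an illustrative assumption.

{% code overflow="wrap" %}
```python
from openai import OpenAI

# Insert your AIML API Key instead of :
API_KEY = ''
API_URL = 'https://api.aimlapi.com'

client = OpenAI(base_url=API_URL, api_key=API_KEY)

response = client.chat.completions.create(
    model="bagoodex/bagoodex-search-v1",
    messages=[
        {
            "role": "user",
            # The query explicitly names a city (illustrative wording)...
            "content": "Weather today in San Francisco",
        },
    ],
    # ...while the IP address still points to Stockholm.
    # The explicitly specified location takes priority.
    extra_body={
        'ip': '192.44.242.19'
    }
)

print(response.choices[0].message.content)
```
{% endcode %}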
Response when the IP parameter is used (from Stockholm), but the request also includes a different location (San Francisco) {% code overflow="wrap" %} ``` "According to the weather forecast, today in San Francisco, there will be a strong cold front moving through the Bay Area from late morning into the afternoon, boosting wind speeds with gusts at around 45 mph midday and featuring high rain rates at times. This may lead to localized runoff issues. The high temperature is expected to be around 56F, with a chance of rain 100% and rainfall near a half an inch. \n\nYou can check the latest forecast and weather conditions on websites such as [https://weather.com/weather/today/l/USCA0987:1:US] or [https://www.accuweather.com/en/us/san-francisco/94103/weather-forecast/347629]." ``` {% endcode %}
--- # Source: https://docs.aimlapi.com/integrations/aider.md # Aider ## About [Aider](https://aider.chat/) is a command-line pair programming tool that connects to OpenAI-compatible APIs. It lets you chat with models to edit your codebase, auto-commit changes, and build software collaboratively from the terminal. This guide explains how to connect **AI/ML API** to **Aider** using the **OpenAI-compatible** flow.\ You’ll get a clean setup with **one endpoint**, support for **slashes in model names**, and **full compatibility** with all chat-completion models. *** ## Quick Setup
| Field | Value |
| --- | --- |
| Base URL | `https://api.aimlapi.com/v1` |
| API Key | Your AI/ML API key (create at [aimlapi.com/app/keys](https://aimlapi.com/app/keys)) |
| Model | `openai/anthropic/claude-3.7-sonnet` (`openai/<your_full_model_id>`) |
| Command Example | `aider --model openai/chatgpt-4o-latest` |
{% hint style="success" %} **Tip:** Always include the `openai/` prefix (case-sensitive) before your model name. This ensures Aider correctly routes requests to your **AI/ML API** endpoint. {% endhint %} *** ## Installation ### ✅ Prerequisites * AI/ML API key * Python 3.8–3.13 installed * Internet access to `api.aimlapi.com` * Aider installed ([Install Guide](https://aider.chat/docs/install.html)) *** ### 1️⃣ Install Aider ```bash python -m pip install aider-install aider-install ```

Install Aider via terminal

*** ### 2️⃣ Configure AI/ML API credentials **Mac/Linux** ```bash export OPENAI_API_BASE=https://api.aimlapi.com/v1 export OPENAI_API_KEY= ``` **Windows (PowerShell)** ```powershell setx OPENAI_API_BASE https://api.aimlapi.com/v1 setx OPENAI_API_KEY # restart your terminal ``` *** ### 3️⃣ Run Aider with AI/ML API Move into your project directory: ```bash cd /to/your/project ``` Then launch Aider with your preferred model: ```bash # GPT-4o (OpenAI) aider --model openai/chatgpt-4o-latest # DeepSeek Chat V3 aider --model openai/deepseek/deepseek-chat # Claude 3.7 Sonnet aider --model openai/anthropic/claude-3.7-sonnet # Gemini 1.5 Pro aider --model openai/google/gemini-1.5-pro ```
Running Aider with AI/ML API model
***

### 4️⃣ Model Prefix Rule

Aider automatically routes requests to your `OPENAI_API_BASE`.\
To connect to **AI/ML API**, **always prefix your model with `openai/`**.

**Pattern:**

```
openai/<your_full_model_id>
```

**Examples:**

* `openai/chatgpt-4o-latest`
* `openai/deepseek/deepseek-chat`
* `openai/anthropic/claude-3.7-sonnet`
* `openai/google/gemini-1.5-pro`

***

## Example Aider Session

```bash
cd ~/workspace/myapp
aider --model openai/chatgpt-4o-latest
```

Aider will:

1. Load your project map.
2. Analyze the repo.
3. Apply AI-suggested edits.
4. Commit changes automatically.
Aider researching your repo

Aider working on code changes
*** ## Common Pitfalls * ***Bad request – check parameters*** → verify the model name and prefix * ***Unknown model*** → confirm it exists in [AI/ML API Models](https://aimlapi.com/models?utm_source=aider\&utm_medium=github\&utm_campaign=integration) * ***Invalid API key*** → re-copy from [AI/ML API Dashboard](https://aimlapi.com/app/keys) * ***No response*** → check `OPENAI_API_BASE` and your internet access *** ## 📚 References * [Dashboard & API Keys](https://aimlapi.com/app) * [Model Catalog](https://aimlapi.com/models) * [Aider GitHub](https://github.com/Aider-AI/aider) * [Aider Installation Docs](https://aider.chat/docs/install.html) --- # Source: https://docs.aimlapi.com/api-references/embedding-models/alibaba-cloud.md # Source: https://docs.aimlapi.com/api-references/speech-models/text-to-speech/alibaba-cloud.md # Source: https://docs.aimlapi.com/api-references/video-models/alibaba-cloud.md # Source: https://docs.aimlapi.com/api-references/image-models/alibaba-cloud.md # Source: https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud.md # Alibaba Cloud - [qwen-max](/api-references/text-models-llm/alibaba-cloud/qwen-max.md) - [qwen-plus](/api-references/text-models-llm/alibaba-cloud/qwen-plus.md) - [qwen-turbo](/api-references/text-models-llm/alibaba-cloud/qwen-turbo.md) - [Qwen2.5-7B-Instruct-Turbo](/api-references/text-models-llm/alibaba-cloud/qwen2.5-7b-instruct-turbo.md) - [Qwen2.5-72B-Instruct-Turbo](/api-references/text-models-llm/alibaba-cloud/qwen2.5-72b-instruct-turbo.md) - [Qwen3-235B-A22B](/api-references/text-models-llm/alibaba-cloud/qwen3-235b-a22b.md) - [qwen3-32b](/api-references/text-models-llm/alibaba-cloud/qwen3-32b.md) - [qwen3-coder-480b-a35b-instruct](/api-references/text-models-llm/alibaba-cloud/qwen3-coder-480b-a35b-instruct.md) - [qwen3-235b-a22b-thinking-2507](/api-references/text-models-llm/alibaba-cloud/qwen3-235b-a22b-thinking-2507.md) - [qwen3-next-80b-a3b-instruct](/api-references/text-models-llm/alibaba-cloud/qwen3-next-80b-a3b-instruct.md) - [qwen3-next-80b-a3b-thinking](/api-references/text-models-llm/alibaba-cloud/qwen3-next-80b-a3b-thinking.md) - [qwen3-max-preview](/api-references/text-models-llm/alibaba-cloud/qwen3-max-preview.md) - [qwen3-max-instruct](/api-references/text-models-llm/alibaba-cloud/qwen3-max-instruct.md) - [qwen3-omni-30b-a3b-captioner](/api-references/text-models-llm/alibaba-cloud/qwen3-omni-30b-a3b-captioner.md) - [qwen3-vl-32b-instruct](/api-references/text-models-llm/alibaba-cloud/qwen3-vl-32b-instruct.md) - [qwen3-vl-32b-thinking](/api-references/text-models-llm/alibaba-cloud/qwen3-vl-32b-thinking.md) --- # Source: https://docs.aimlapi.com/use-cases/animate-images-a-childrens-encyclopedia.md # Animate Images: A Children’s Encyclopedia {% hint style="warning" %} **Legal Notice**\ Please remember that reference images may be subject to copyright. Make sure to respect the law and avoid sharing the animated versions online if doing so could infringe intellectual property rights.\ Just use them to bring a bit of joy to kids at home :tada: {% endhint %} ## Idea and Step-by-Step Plan Today, we’re going to bring a page from a children’s encyclopedia to life — with pictures! Here’s the plan: 1. Take an article from some free encyclopedia for children. (Of course, you can use a children's story, short illustrated tales, or any other suitable content.) To keep it simple, we’ll focus only on text and illustrations. 2. 
Based on each illustration, a smart multimodal[^1] chat model [**gpt-4o**](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) comes up with a short video idea — a little scene that matches the content. The model then generates a prompt for a video model.
3. With this prompt, generate a 5-second video using a video model and download the generated video from the server.
4. Convert it to a GIF using any free online tool.
5. Replace the original static image with the animated GIF.

Repeat this process for every illustration on the page.

## A Page We’re Bringing to Life
Article Example *** ***What Are Raccoons?*** *Raccoons are small, furry animals with fluffy striped tails and black “masks” around their eyes. They live in forests, near rivers and lakes—and sometimes even close to people in towns and cities. Raccoons are very clever, curious, and quick with their paws.*
*One of the raccoon's most famous habits is "washing" its food. But raccoons aren’t really cleaning their meals. They just love to roll and rub things between their paws, especially near water. Scientists believe this helps them understand what they’re holding.* *Raccoons eat almost anything: berries, fruits, nuts, insects, fish, and even bird eggs. They're nocturnal, which means they go out at night to look for food and sleep during the day in cozy tree hollows.*
*Raccoons are very social. Young raccoons love to play—tumbling in the grass, hiding behind trees, and exploring everything around them. And sometimes, if they feel safe, raccoons might even come closer to where people are—especially if there's a snack nearby!* *Even though they can be a little mischievous, raccoons play an important role in nature. They help spread seeds and keep insect populations in check.* *So next time you see a raccoon, remember: it’s not just a fluffy animal—it’s a real forest explorer!* ***
## Full Walkthrough 1. Let’s take the raccoon article from the previous section. To upload the illustrations into the chat model, we’ll save them to disk first. Later, you can use the resulting folder of images to build an HTML page with animated visuals.
2. Let’s have the multimodal [**gpt-4o**](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) model analyze the image and suggest a prompt for the video:
Python code {% code overflow="wrap" %} ```python from openai import OpenAI import base64 import mimetypes from pathlib import Path base_url = "https://api.aimlapi.com/" api_key = "" # image path (Insert your image file path instead. Images in PNG, JPG, and WebP formats are supported.) file_path = Path("C:/Users/user/Documents/example/images/racoons_0.png") # Detect the MIME type based on file extension mime_type, _ = mimetypes.guess_type(file_path) # Supported image formats allowed_mime_types = {"image/png", "image/jpeg", "image/webp"} # Raise an error if the format is not supported if mime_type not in allowed_mime_types: raise ValueError(f"Unsupported image format: {mime_type}. Supported formats: PNG, JPG, WebP.") # Read and encode the image in base64 with open(file_path, "rb") as image_file: base64_image = base64.b64encode(image_file.read()).decode("utf-8") # Create a data URL for the base64 image image_data_url = f"data:{mime_type};base64,{base64_image}" # Send the image to GPT-4o via OpenAI's API client = OpenAI(api_key=api_key, base_url=base_url) completion = client.chat.completions.create( model="gpt-4o", messages=[ {"role": "user", "content": "Based on the provided image, come up with a short scenario (no need to output it) and give me only a short, suitable prompt for generating a 5-second animation based on an image with the following description. Do not include the word 'Prompt:' — just output the prompt itself. Describe possible movements, background changes, etc."}, { "role": "user", "content":[ { "type": "image_url", "image_url": { "url": image_data_url } } ] } ], ) image_analysis_result = completion.choices[0].message.content print(image_analysis_result) ``` {% endcode %}
Response: Generated Prompt Based On the Image Description {% code overflow="wrap" %} ``` The raccoon's paw gently ripples the stream as tiny leaves float by; the trees sway slightly in the breeze, and sunlight filters through, casting shifting patterns on the rocks and grass. ``` {% endcode %}
3. Now it's time to generate a short video based on our image and the prompt prepared for us by the chat model in the previous step. We'll use the [**kling-video/v1.6/pro/image-to-video**](https://docs.aimlapi.com/api-references/video-models/kling-ai/v1.6-pro-image-to-video) model from Kling AI.
Python code

{% code overflow="wrap" %}
```python
import requests
import base64
import mimetypes
from pathlib import Path
import time

base_url = "https://api.aimlapi.com/v2"

# Insert your AIML API Key instead of :
api_key = ""

generated_prompt = "The raccoon's paw gently washes the fruit in the stream as tiny leaves float by; the trees sway slightly in the breeze, and sunlight filters through, casting shifting patterns on the rocks and grass."

# Insert your image file path instead:
file_path = Path("C:/Users/user/Documents/example/images/racoons_0.png")

# Detect the MIME type based on file extension
mime_type, _ = mimetypes.guess_type(file_path)

# Supported image formats
allowed_mime_types = {"image/png", "image/jpeg", "image/webp"}

# Raise an error if the format is not supported
if mime_type not in allowed_mime_types:
    raise ValueError(f"Unsupported image format: {mime_type}. Supported formats: PNG, JPG, WebP.")

# Creating and sending a video generation task to the server
def generate_video(im_url):
    url = f"{base_url}/generate/video/kling/generation"
    headers = {
        "Authorization": f"Bearer {api_key}",
    }
    data = {
        "model": "kling-video/v1.6/pro/image-to-video",
        "image_url": im_url,
        "prompt": generated_prompt,
        "duration": 5
    }

    response = requests.post(url, json=data, headers=headers)

    if response.status_code >= 400:
        print(f"Error: {response.status_code} - {response.text}")
    else:
        response_data = response.json()
        print(response_data)
        return response_data

# Requesting the result of the task from the server using the generation_id
def get_video(gen_id):
    url = f"{base_url}/generate/video/kling/generation"
    params = {
        "generation_id": gen_id,
    }
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json"
    }

    response = requests.get(url, params=params, headers=headers)
    # print("Generation:", response.json())
    return response.json()

def main():
    # Read and encode the image in base64
    with open(file_path, "rb") as image_file:
        base64_image = base64.b64encode(image_file.read()).decode("utf-8")

    # Create a data URL for the base64 image
    image_data_url = f"data:{mime_type};base64,{base64_image}"

    # Generate video
    gen_response = generate_video(image_data_url)
    gen_id = gen_response.get("id")
    print("Gen_ID: ", gen_id)

    # Try to retrieve the video from the server every 10 sec
    if gen_id:
        start_time = time.time()
        timeout = 600
        while time.time() - start_time < timeout:
            response_data = get_video(gen_id)
            if response_data is None:
                print("Error: No response from API")
                break

            status = response_data.get("status")
            print("Status:", status)

            if status == "waiting" or status == "active" or status == "queued" or status == "generating":
                print("Still waiting... Checking again in 10 seconds.")
                time.sleep(10)
            else:
                print("Processing complete:\n", response_data)
                return response_data

        print("Timeout reached. Stopping.")
        return None

if __name__ == "__main__":
    main()
```
{% endcode %}
Response {% code overflow="wrap" %} ```json5 {'id': '9e4c45e7-5785-42f3-8271-ce8a8b31dd04:kling-video/v1.6/pro/image-to-video', 'status': 'queued'} Gen_ID: 9e4c45e7-5785-42f3-8271-ce8a8b31dd04:kling-video/v1.6/pro/image-to-video generating Still waiting... Checking again in 10 seconds. generating Still waiting... Checking again in 10 seconds. generating Still waiting... Checking again in 10 seconds. generating Still waiting... Checking again in 10 seconds. generating Still waiting... Checking again in 10 seconds. generating Still waiting... Checking again in 10 seconds. generating Still waiting... Checking again in 10 seconds. generating Still waiting... Checking again in 10 seconds. generating Still waiting... Checking again in 10 seconds. generating Still waiting... Checking again in 10 seconds. generating Still waiting... Checking again in 10 seconds. generating ... generating Still waiting... Checking again in 10 seconds. completed Processing complete:\n {'id': '9e4c45e7-5785-42f3-8271-ce8a8b31dd04:kling-video/v1.6/pro/image-to-video', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/eagle/files/kangaroo/Kx8BCNAB0eqhasWyZMTo3_output.mp4', 'content_type': 'video/mp4', 'file_name': 'output.mp4', 'file_size': 11725406}} ``` {% endcode %}
4. We've generated two videos and will now convert them into GIF animations using [a free third-party web service](https://ezgif.com/video-to-gif/), for easier playback on a web page. We'll also reduce the frame rate and size to ensure smoother playback. We'll save the resulting GIF files in the same folder, using the same names as the original PNGs.
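If you prefer to script this conversion step instead of using a web service, a minimal local sketch could look like the one below. It assumes that `ffmpeg` is installed and available on your PATH, and that the generated MP4 files have already been downloaded into the same folder as the original PNGs (the folder path matches the one used earlier in this walkthrough).

{% code overflow="wrap" %}
```python
import subprocess
from pathlib import Path

# Folder with the downloaded MP4 files (same folder as the original PNGs)
video_dir = Path("C:/Users/user/Documents/example/images")

for mp4_path in video_dir.glob("*.mp4"):
    gif_path = mp4_path.with_suffix(".gif")
    # Reduce the frame rate to 10 fps and scale the width down to 480 px
    # to keep the GIF small and smooth enough for a web page.
    subprocess.run(
        [
            "ffmpeg", "-y",
            "-i", str(mp4_path),
            "-vf", "fps=10,scale=480:-1:flags=lanczos",
            "-loop", "0",
            str(gif_path),
        ],
        check=True,
    )
    print("Saved:", gif_path)
```
{% endcode %}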
5. You can also ask any chat model (e.g., [**gpt-4o**](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o)) to generate a web page with the original text and the GIF animations placed in the same spots as the original illustrations. ## Results
Animated Article Example *** ***What Are Raccoons?*** *Raccoons are small, furry animals with fluffy striped tails and black “masks” around their eyes. They live in forests, near rivers and lakes—and sometimes even close to people in towns and cities. Raccoons are very clever, curious, and quick with their paws.*
*One of the raccoon's most famous habits is "washing" its food. But raccoons aren’t really cleaning their meals. They just love to roll and rub things between their paws, especially near water. Scientists believe this helps them understand what they’re holding.* *Raccoons eat almost anything: berries, fruits, nuts, insects, fish, and even bird eggs. They're nocturnal, which means they go out at night to look for food and sleep during the day in cozy tree hollows.*
*Raccoons are very social. Young raccoons love to play—tumbling in the grass, hiding behind trees, and exploring everything around them. And sometimes, if they feel safe, raccoons might even come closer to where people are—especially if there's a snack nearby!* *Even though they can be a little mischievous, raccoons play an important role in nature. They help spread seeds and keep insect populations in check.* *So next time you see a raccoon, remember: it’s not just a fluffy animal—it’s a real forest explorer!* ***
## Room for Improvement Of course, the goal is to automate the process as much as possible — and to make the images look more natural and visually appealing: * Generate looping videos to make sure the animated illustrations move smoothly. * Simply pass a page URL or document to the program and get back a local webpage with animations. * Add logic to skip images below a certain size, to avoid animating icons, logos, or other minor elements. * Support a wider range of image formats. * Automate GIF conversion from video directly within the program. [^1]: **Multimodal** AI models can understand or generate different types of data—like text, images, audio, or video—within a single system. They combine multiple input types to better understand context and respond more intelligently. --- # Source: https://docs.aimlapi.com/api-references/text-models-llm/anthracite.md # Anthracite - [magnum-v4](/api-references/text-models-llm/anthracite/magnum-v4.md) --- # Source: https://docs.aimlapi.com/capabilities/anthropic.md # Source: https://docs.aimlapi.com/api-references/embedding-models/anthropic.md # Source: https://docs.aimlapi.com/api-references/text-models-llm/anthropic.md # Anthropic - [Claude 3 Haiku](/api-references/text-models-llm/anthropic/claude-3-haiku.md) - [Claude 3 Opus](/api-references/text-models-llm/anthropic/claude-3-opus.md) - [Claude 3.5 Haiku](/api-references/text-models-llm/anthropic/claude-3.5-haiku.md) - [Claude 3.7 Sonnet](/api-references/text-models-llm/anthropic/claude-3.7-sonnet.md) - [Claude 4 Opus](/api-references/text-models-llm/anthropic/claude-4-opus.md) - [Claude 4 Sonnet](/api-references/text-models-llm/anthropic/claude-4-sonnet.md) - [Claude 4.1 Opus](/api-references/text-models-llm/anthropic/claude-opus-4.1.md) - [Claude 4.5 Sonnet](/api-references/text-models-llm/anthropic/claude-4-5-sonnet.md) - [Claude 4.5 Haiku](/api-references/text-models-llm/anthropic/claude-4.5-haiku.md) - [Claude 4.5 Opus](/api-references/text-models-llm/anthropic/claude-4.5-opus.md) --- # Source: https://docs.aimlapi.com/api-references/speech-models/speech-to-text/assembly-ai.md # Assembly AI - [slam-1](/api-references/speech-models/speech-to-text/assembly-ai/slam-1.md) - [universal](/api-references/speech-models/speech-to-text/assembly-ai/universal.md) --- # Source: https://docs.aimlapi.com/api-references/speech-models/text-to-speech/deepgram/aura-2.md # aura 2
This documentation is valid for the following list of our models:

* `#g1_aura-2-amalthea-en`
* `#g1_aura-2-andromeda-en`
* `#g1_aura-2-apollo-en`
* `#g1_aura-2-arcas-en`
* `#g1_aura-2-aries-en`
* `#g1_aura-2-asteria-en`
* `#g1_aura-2-athena-en`
* `#g1_aura-2-atlas-en`
* `#g1_aura-2-aurora-en`
* `#g1_aura-2-cora-en`
* `#g1_aura-2-cordelia-en`
* `#g1_aura-2-delia-en`
* `#g1_aura-2-draco-en`
* `#g1_aura-2-electra-en`
* `#g1_aura-2-harmonia-en`
* `#g1_aura-2-helena-en`
* `#g1_aura-2-hera-en`
* `#g1_aura-2-hermes-en`
* `#g1_aura-2-hyperion-en`
* `#g1_aura-2-iris-en`
* `#g1_aura-2-janus-en`
* `#g1_aura-2-juno-en`
* `#g1_aura-2-jupiter-en`
* `#g1_aura-2-luna-en`
* `#g1_aura-2-mars-en`
* `#g1_aura-2-minerva-en`
* `#g1_aura-2-neptune-en`
* `#g1_aura-2-odysseus-en`
* `#g1_aura-2-ophelia-en`
* `#g1_aura-2-orion-en`
* `#g1_aura-2-orpheus-en`
* `#g1_aura-2-pandora-en`
* `#g1_aura-2-phoebe-en`
* `#g1_aura-2-pluto-en`
* `#g1_aura-2-saturn-en`
* `#g1_aura-2-selene-en`
* `#g1_aura-2-thalia-en`
* `#g1_aura-2-theia-en`
* `#g1_aura-2-vesta-en`
* `#g1_aura-2-zeus-en`
* `#g1_aura-2-celeste-es`
* `#g1_aura-2-estrella-es`
* `#g1_aura-2-nestor-es`
## Model Overview Aura 2 produces natural, human-like speech with accurate domain-specific pronunciation — covering drug names, legal terms, alphanumeric strings, and structured inputs such as dates, times, and currency. It also maintains sub-200 ms TTFB latency and offers cost-efficient scalability. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/tts > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.TextToSpeechResponse":{"type":"object","properties":{"metadata":{"type":"object","properties":{"transaction_key":{"type":"string"},"request_id":{"type":"string"},"sha256":{"type":"string"},"created":{"type":"string","format":"date-time"},"duration":{"type":"number"},"channels":{"type":"number"},"models":{"type":"array","items":{"type":"string"}},"model_info":{"type":"object","additionalProperties":{"type":"object","properties":{"name":{"type":"string"},"version":{"type":"string"},"arch":{"type":"string"}},"required":["name","version","arch"]}}},"required":["transaction_key","request_id","sha256","created","duration","channels","models","model_info"]}},"required":["metadata"]}}},"paths":{"/v1/tts":{"post":{"operationId":"VoiceModelsController_textToSpeech_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["#g1_aura-2-amalthea-en"]},"text":{"type":"string","description":"The text content to be converted to speech."},"container":{"type":"string","description":"The file format wrapper for the output audio. The available options depend on the encoding type."},"encoding":{"type":"string","enum":["linear16","mulaw","alaw","mp3","opus","flac","aac"],"default":"linear16","description":"Specifies the expected encoding of your audio output"},"sample_rate":{"type":"string","description":"Audio sample rate in Hz."}},"required":["model","text"]}}}},"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.TextToSpeechResponse"}}}}},"tags":["Voice Models"]}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import os import requests def main(): url = "https://api.aimlapi.com/v1/tts" headers = { # Insert your AI/ML API key instead of : "Authorization": "Bearer ", } payload = { "model": "#g1_aura-2-helena-en", "text": ''' Cities of the future promise to radically transform how people live, work, and move. Instead of sprawling layouts, we’ll see vertical structures that integrate residential, work, and public spaces into single, self-sustaining ecosystems. Architecture will adapt to climate conditions, and buildings will be energy-efficient—generating power through solar panels, wind turbines, and even foot traffic. 
''' } response = requests.post(url, headers=headers, json=payload, stream=True) # result = os.path.join(os.path.dirname(__file__), "audio.wav") # if you run this code as a .py file result = "audio.wav" # if you run this code in Jupyter Notebook with open(result, "wb") as write_stream: for chunk in response.iter_content(chunk_size=8192): if chunk: write_stream.write(chunk) print("Audio saved to:", result) main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const fs = require("fs"); // Insert your AI/ML API key instead of : const apiKey = ""; const data = JSON.stringify({ model: "#g1_aura-2-helena-en", text: ` Cities of the future promise to radically transform how people live, work, and move. Instead of sprawling layouts, we’ll see vertical structures that integrate residential, work, and public spaces into single, self-sustaining ecosystems. Architecture will adapt to climate conditions, and buildings will be energy-efficient—generating power through solar panels, wind turbines, and even foot traffic. ` }); const options = { hostname: "api.aimlapi.com", path: "/v1/tts", method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), } }; const req = https.request(options, (res) => { if (res.statusCode >= 400) { let error = ""; res.on("data", chunk => error += chunk); res.on("end", () => { console.error(`Error ${res.statusCode}:`, error); }); return; } const file = fs.createWriteStream("audio.wav"); res.pipe(file); file.on("finish", () => { file.close(); console.log("Audio saved to audio.wav"); }); }); req.on("error", (e) => { console.error("Request error:", e); }); req.write(data); req.end(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response ``` Audio saved to: audio.wav ```
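The example above saves raw `linear16` audio to a WAV file. The schema also lists other output encodings; a minimal variation of the same request, assuming you want compressed MP3 output instead, could look like this:

{% code overflow="wrap" %}
```python
import requests

url = "https://api.aimlapi.com/v1/tts"
headers = {
    # Insert your AI/ML API key instead of :
    "Authorization": "Bearer ",
}
payload = {
    "model": "#g1_aura-2-helena-en",
    "text": "Cities of the future promise to radically transform how people live, work, and move.",
    # 'encoding' is one of the options listed in the schema above
    "encoding": "mp3",
}

response = requests.post(url, headers=headers, json=payload, stream=True)

with open("audio.mp3", "wb") as write_stream:
    for chunk in response.iter_content(chunk_size=8192):
        if chunk:
            write_stream.write(chunk)

print("Audio saved to: audio.mp3")
```
{% endcode %}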
The generated audio: {% embed url="" %} *** --- # Source: https://docs.aimlapi.com/api-references/speech-models/text-to-speech/deepgram/aura.md # aura
This documentation is valid for the following list of our models:

* `#g1_aura-angus-en`
* `#g1_aura-arcas-en`
* `#g1_aura-asteria-en`
* `#g1_aura-athena-en`
* `#g1_aura-helios-en`
* `#g1_aura-hera-en`
* `#g1_aura-luna-en`
* `#g1_aura-orion-en`
* `#g1_aura-orpheus-en`
* `#g1_aura-perseus-en`
* `#g1_aura-stella-en`
* `#g1_aura-zeus-en`
## Model Overview Deepgram Aura is the first text-to-speech (TTS) AI model designed for real-time, conversational AI agents and applications. It delivers human-like voice quality with unparalleled speed and efficiency. It has dozen natural, human-like voices with lower latency than any comparable voice AI alternative and supports seamless integration with Deepgram's industry-leading Nova speech-to-text API. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/tts > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.TextToSpeechResponse":{"type":"object","properties":{"metadata":{"type":"object","properties":{"transaction_key":{"type":"string"},"request_id":{"type":"string"},"sha256":{"type":"string"},"created":{"type":"string","format":"date-time"},"duration":{"type":"number"},"channels":{"type":"number"},"models":{"type":"array","items":{"type":"string"}},"model_info":{"type":"object","additionalProperties":{"type":"object","properties":{"name":{"type":"string"},"version":{"type":"string"},"arch":{"type":"string"}},"required":["name","version","arch"]}}},"required":["transaction_key","request_id","sha256","created","duration","channels","models","model_info"]}},"required":["metadata"]}}},"paths":{"/v1/tts":{"post":{"operationId":"VoiceModelsController_textToSpeech_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["#g1_aura-asteria-en"]},"text":{"type":"string","description":"The text content to be converted to speech."},"container":{"type":"string","description":"The file format wrapper for the output audio. The available options depend on the encoding type."},"encoding":{"type":"string","enum":["linear16","mulaw","alaw","mp3","opus","flac","aac"],"default":"linear16","description":"Specifies the expected encoding of your audio output"},"sample_rate":{"type":"string","description":"Audio sample rate in Hz."}},"required":["model","text"]}}}},"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.TextToSpeechResponse"}}}}},"tags":["Voice Models"]}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import os import requests def main(): url = "https://api.aimlapi.com/v1/tts" headers = { # Insert your AI/ML API key instead of : "Authorization": "Bearer ", } payload = { "model": "#g1_aura-athena-en", "text": ''' Cities of the future promise to radically transform how people live, work, and move. Instead of sprawling layouts, we’ll see vertical structures that integrate residential, work, and public spaces into single, self-sustaining ecosystems. Architecture will adapt to climate conditions, and buildings will be energy-efficient—generating power through solar panels, wind turbines, and even foot traffic. 
''' } response = requests.post(url, headers=headers, json=payload, stream=True) # result = os.path.join(os.path.dirname(__file__), "audio.wav") # if you run this code as a .py file result = "audio.wav" # if you run this code in Jupyter Notebook with open(result, "wb") as write_stream: for chunk in response.iter_content(chunk_size=8192): if chunk: write_stream.write(chunk) print("Audio saved to:", result) main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const fs = require("fs"); // Insert your AI/ML API key instead of : const apiKey = ""; const data = JSON.stringify({ model: "#g1_aura-athena-en", text: ` Cities of the future promise to radically transform how people live, work, and move. Instead of sprawling layouts, we’ll see vertical structures that integrate residential, work, and public spaces into single, self-sustaining ecosystems. Architecture will adapt to climate conditions, and buildings will be energy-efficient—generating power through solar panels, wind turbines, and even foot traffic. ` }); const options = { hostname: "api.aimlapi.com", path: "/v1/tts", method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), } }; const req = https.request(options, (res) => { if (res.statusCode >= 400) { let error = ""; res.on("data", chunk => error += chunk); res.on("end", () => { console.error(`Error ${res.statusCode}:`, error); }); return; } const file = fs.createWriteStream("audio.wav"); res.pipe(file); file.on("finish", () => { file.close(); console.log("Audio saved to audio.wav"); }); }); req.on("error", (e) => { console.error("Request error:", e); }); req.write(data); req.end(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response ``` Audio saved to: audio.wav ```
{% embed url="" %} --- # Source: https://docs.aimlapi.com/integrations/autogpt.md # AutoGPT ## About AutoGPT is an open-source platform designed to help you build, test, and run AI agents using a no-code visual interface. It allows users to link LLMs with tools, memory, planning modules, and action chains. By configuring block-based workflows, you can create custom agents that reason, plan, and act in multi-step environments. In this guide, you'll learn how to connect AutoGPT with high-performance language models from AI/ML API for use in AI-driven text generation tasks. *** ## Prerequisites Before proceeding, ensure: * You’ve followed [AutoGPT’s Platform Setup Guide](https://docs.agpt.co/platform/getting-started/) and AutoGPT is running locally. * You have an **API key** from [AI/ML API](https://aimlapi.com/app/keys). *** ## Step-by-Step Setup ### 🥇 Step 1. Install and Launch AutoGPT Locally
Use the latest official guide published on the AutoGPT documentation site: [AutoGPT Getting Started Guide](https://docs.agpt.co/platform/getting-started/) > 💡 Tip: Always refer to the most recent version of the guide to avoid setup issues or deprecated steps. Make sure you're running AutoGPT on `http://localhost:3000`. ### 🥈 Step 2. Open the Visual Block Builder Before proceeding, make sure you're **logged in** to your AI/ML API account or **create an account** if you haven't already: [aimlapi.com](https://aimlapi.com/app/?utm_source=autogpt\&utm_medium=github\&utm_campaign=integration) Once AutoGPT is running: 🔗 Open: or click **"Build"** from the navigation bar.
> 💡 Tip: This is your no-code playground to configure agents and workflows. *** ### 🥉 Step 3. Click “Blocks” on the Left Sidebar * Find the left panel. * Click the button labeled **"Blocks"**.
This shows you all available functional blocks (including LLMs, tools, memory, etc.) *** ### 🔍 Step 4. Search for “AI Text Generator” In the search bar: * Type: `ai text generator` * Click on the **AI Text Generator** block when it appears.
> 🧠 This block lets you plug in a language model for text completions, prompts, and chat flows. *** ### 🤖 Step 5. Select One of These AIMLAPI Models
Click to configure the block, and in the model selection field choose any of the following **AIMLAPI models**:
| Model | Generation Speed | Reasoning Depth and Quality | Optimization for Tasks |
| --- | --- | --- | --- |
| `Qwen/Qwen2.5-72B-Instruct-Turbo` | Medium | High | Text-based tasks |
| `nvidia/llama-3.1-nemotron-70b-instruct` | Medium | High | Analytics and reasoning |
| `meta-llama/Llama-3.3-70B-Instruct-Turbo` | Low | Very high | Complex tasks |
| `meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo` | Low | Very high | Deep reasoning |
| `meta-llama/Llama-3.2-3B-Instruct-Turbo` | High | Medium | Quick responses |
> 🧹 These models are optimized for high-speed generation with reasoning capabilities. *** ### 🔑 Step 6. Enter Your Prompt and API Credentials In the **AI Text Generator** block: 1. Set **Prompt**: Type any message you want the model to respond to. 2. Set **API Key**: * Enter your AI/ML API key.
> 💡 Get your API key here: [aimlapi.com/app/keys](https://aimlapi.com/app/keys?utm_source=autogpt\&utm_medium=github\&utm_campaign=integration)
*** ### 🎉 Step 7. Done – You’re All Set! Now that you’ve configured the prompt, selected a model, and added your API key — let’s finalize and run your agent in AutoGPT. *** #### ✅ 1. Save your Agent Before running, make sure to save your current block configuration as an agent: 1. Click the **"Save"** button at the top-right of the builder interface. 2. In the popup, enter a name for your agent (e.g., `aimlapi_test_agent`). 3. Click **"Save Agent"** to confirm. > 💡 Saving your agent allows you to reuse it, schedule runs, or chain it into larger workflows with memory, tools, and action blocks.
*** #### ▶️ 2. Run your Agent After saving, you can now launch the agent: 1. Press the **"Run"** button next to your agent on the workspace screen. 2. AutoGPT will trigger the `AI Text Generator` block and initiate a request to the AI/ML API model. > ⏱️ At this point, the system will send your prompt to the selected model and wait for a response.
***

#### 🧾 3. View the Output

1. **Navigate to the "Output" panel.** At your **AI/ML API block**, locate the **"Output"** panel below it.
2. You'll see the response returned by the AI/ML API model.
3. You can copy the result, export it, or pass it into further blocks (like analysis, memory, or a webhook).
*** 🎉 That’s it! Your AutoGPT agent is now generating text using **AI/ML API’s powerful language models**. *** > 💡 You can now expand your agent by chaining the `AI Text Generator` block to: > > * 🔧 **Tools** – call external APIs, perform web scraping, manage files. > * 🧠 **Memory** – store and reuse past interactions for contextual reasoning. > * ⚙️ **Actions / Chains** – create complex behavior flows and intelligent pipelines. ## More For further information about the framework, please check [the official AutoGPT documentation](https://docs.agpt.co/platform/getting-started/). --- # Source: https://docs.aimlapi.com/api-references/video-models/kling-ai/avatar-pro.md # avatar-pro {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `klingai/avatar-pro` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} From a single image and a voice track, this model generates expressive character animations aligned with the speech’s rhythm, intonation, and meaning. This version outputs 1080p video at 48 fps. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas ### Create a video generation task and send it to the server You can create a video with this API by providing a reference image of a character and an audio file. The character will deliver the audio with full lip-sync and natural gestures. This POST request creates and submits a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["klingai/avatar-pro"]},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame for the video."},"audio_url":{"type":"string","description":"The URL of the audio file. Supported formats: MP3, WAV, M4A, AAC. Maximum file size: 5 MB."},"prompt":{"type":"string","maxLength":2500,"description":"The text description of the scene, subject, or action to generate in the video."}},"required":["model","image_url","audio_url"],"title":"klingai/avatar-pro"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # replace with your actual AI/ML API key api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "klingai/avatar-pro", "image_url": "https://cdn.aimlapi.com/assets/content/office_man.png", "audio_url": "https://storage.googleapis.com/falserverless/example_inputs/omnihuman_audio.mp3", # "prompt": "Frequent bursts of laughter and exaggerated, over-the-top hand gestures." } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... 
Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "klingai/avatar-pro", image_url: "https://cdn.aimlapi.com/assets/content/office_man.png", audio_url: "https://storage.googleapis.com/falserverless/example_inputs/omnihuman_audio.mp3", // prompt: 'Frequent bursts of laughter and exaggerated, over-the-top hand gestures.', }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("Failed to start generation"); return; } const genId = genResponse.id; console.log("Gen_ID:", genId); const startTime = Date.now(); const timeout = 600000; const checkStatus = () => { if (Date.now() - startTime > timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, 10000); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': '2baf5bc0-9565-4ff1-a775-d66ce749164c:klingai/avatar-pro', 'status': 'queued', 'meta': {'usage': {'tokens_used': 1207500}}} Generation ID: 2baf5bc0-9565-4ff1-a775-d66ce749164c:klingai/avatar-pro Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. ... Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': '2baf5bc0-9565-4ff1-a775-d66ce749164c:klingai/avatar-pro', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/flamingo/files/b/zebra/V7k8qsL07DkjMkZO-8Ogt_output.mp4'}} ``` {% endcode %}
**Generation time:** \~ 8 min 50 s. **Original** (1920x1080, with sound): {% embed url="" %} The following video was generated by adding just one line to our example: {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python "prompt": "Frequent bursts of laughter and exaggerated, over-the-top hand gestures." ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```python prompt: 'Frequent bursts of laughter and exaggerated, over-the-top hand gestures.', ``` {% endcode %} {% endtab %} {% endtabs %} See how dramatically the `prompt` parameter can change the character’s behavior and mannerisms: {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/kling-ai/avatar-standard.md # avatar-standard {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `klingai/avatar-standard` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} From a single image and a voice track, this model generates expressive character animations aligned with the speech’s rhythm, intonation, and meaning. This version outputs 720p video at 24 fps. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas ### Create a video generation task and send it to the server You can create a video with this API by providing a reference image of a character and an audio file. The character will deliver the audio with full lip-sync and natural gestures. This POST request creates and submits a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["klingai/avatar-standard"]},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame for the video."},"audio_url":{"type":"string","description":"The URL of the audio file. Supported formats: MP3, WAV, M4A, AAC. Maximum file size: 5 MB."},"prompt":{"type":"string","maxLength":2500,"description":"The text description of the scene, subject, or action to generate in the video."}},"required":["model","image_url","audio_url"],"title":"klingai/avatar-standard"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # replace with your actual AI/ML API key api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "klingai/avatar-standard", "image_url": "https://cdn.aimlapi.com/assets/content/office_man.png", "audio_url": "https://storage.googleapis.com/falserverless/example_inputs/omnihuman_audio.mp3", # "prompt": "A person speaking playfully, laughing frequently and gesturing wildly." } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... 
Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "klingai/avatar-standard", image_url: "https://cdn.aimlapi.com/assets/content/office_man.png", audio_url: "https://storage.googleapis.com/falserverless/example_inputs/omnihuman_audio.mp3", // prompt: 'A person speaking playfully, laughing frequently and gesturing wildly.', }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("Failed to start generation"); return; } const genId = genResponse.id; console.log("Gen_ID:", genId); const startTime = Date.now(); const timeout = 600000; const checkStatus = () => { if (Date.now() - startTime > timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, 10000); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': '76a3553b-7c4f-4fe1-9682-72008ef3a0fe:klingai/avatar-standard', 'status': 'queued', 'meta': {'usage': {'tokens_used': 590100}}} Generation ID: 76a3553b-7c4f-4fe1-9682-72008ef3a0fe:klingai/avatar-standard Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': '76a3553b-7c4f-4fe1-9682-72008ef3a0fe:klingai/avatar-standard', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/flamingo/files/b/zebra/-pjJHb89XFkYTPOQjb5G2_output.mp4'}} ``` {% endcode %}
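Once the task status is `completed`, the `video.url` field contains a direct link to the rendered MP4. Below is a minimal download sketch; it assumes `response_data` is the final object returned by `main()` in the example above, and the local file name is arbitrary:

{% code overflow="wrap" %}
```python
import requests

def download_video(response_data, file_name="generated_video.mp4"):
    # The completed generation object exposes the result under video.url
    video_url = response_data["video"]["url"]
    with requests.get(video_url, stream=True) as video_response:
        video_response.raise_for_status()
        with open(file_name, "wb") as file:
            for chunk in video_response.iter_content(chunk_size=8192):
                file.write(chunk)
    return file_name

# Usage with the polling example above:
# result = main()
# if result and result.get("status") == "completed":
#     download_video(result)
```
{% endcode %}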
**Generation time:** \~ 4 min. **Original** (1280x720, with sound): {% embed url="" %} The following video was generated by adding just one line to our example: {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python "prompt": "A person speaking playfully, laughing frequently and gesturing wildly." ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```python prompt: 'A person speaking playfully, laughing frequently and gesturing wildly.', ``` {% endcode %} {% endtab %} {% endtabs %} See how dramatically the `prompt` parameter can change the character’s behavior and mannerisms: {% embed url="" %} `"prompt": "A person speaking playfully, laughing frequently and gesturing wildly."` {% endembed %} --- # Source: https://docs.aimlapi.com/api-references/embedding-models/baai.md # BAAI - [bge-base-en](/api-references/embedding-models/baai/bge-base-en.md) - [bge-large-en](/api-references/embedding-models/baai/bge-large-en.md) --- # Source: https://docs.aimlapi.com/solutions/bagoodex.md # Bagoodex - [AI Search Engine](/solutions/bagoodex/ai-search-engine.md): Description, API schema, and usage examples of the specialized solution — AI Search Engine. - [Find Links](/solutions/bagoodex/ai-search-engine/find-links.md) - [Find Images](/solutions/bagoodex/ai-search-engine/find-images.md) - [Find Videos](/solutions/bagoodex/ai-search-engine/find-videos.md) - [Find the Weather](/solutions/bagoodex/ai-search-engine/find-the-weather.md) - [Find a Local Map](/solutions/bagoodex/ai-search-engine/find-a-local-map.md) - [Get a Knowledge Structure](/solutions/bagoodex/ai-search-engine/get-a-knowledge-structure.md) --- # Source: https://docs.aimlapi.com/api-references/text-models-llm/baidu.md # Baidu - [ernie-4.5-8k-preview](/api-references/text-models-llm/baidu/ernie-4.5-8k-preview.md) - [ernie-4.5-0.3b](/api-references/text-models-llm/baidu/ernie-4.5-0.3b.md) - [ernie-4.5-21b-a3b](/api-references/text-models-llm/baidu/ernie-4.5-21b-a3b.md) - [ernie-4.5-21b-a3b-thinking](/api-references/text-models-llm/baidu/ernie-4.5-21b-a3b-thinking.md) - [ernie-4.5-vl-28b-a3b](/api-references/text-models-llm/baidu/ernie-4.5-vl-28b-a3b.md) - [ernie-4.5-vl-424b-a47b](/api-references/text-models-llm/baidu/ernie-4.5-vl-424b-a47b.md) - [ernie-4.5-300b-a47b](/api-references/text-models-llm/baidu/ernie-4.5-300b-a47b.md) - [ernie-4.5-300b-a47b-paddle](/api-references/text-models-llm/baidu/ernie-4.5-300b-a47b-paddle.md) - [ernie-4.5-turbo-128k](/api-references/text-models-llm/baidu/ernie-4.5-turbo-128k.md) - [ernie-4.5-turbo-vl-32k](/api-references/text-models-llm/baidu/ernie-4.5-turbo-vl-32k.md) - [ernie-5.0-thinking-preview](/api-references/text-models-llm/baidu/ernie-5.0-thinking-preview.md) - [ernie-5.0-thinking-latest](/api-references/text-models-llm/baidu/ernie-5.0-thinking-latest.md) - [ernie-x1-turbo-32k](/api-references/text-models-llm/baidu/ernie-x1-turbo-32k.md) - [ernie-x1.1-preview](/api-references/text-models-llm/baidu/ernie-x1.1-preview.md) --- # Source: https://docs.aimlapi.com/capabilities/batch-processing.md # Batch Processing Batch processing (batching) allows you to send multiple message requests in a single batch and retrieve the results later (within up to 1 hour). The main goals are to reduce costs by up to 50% and increase throughput for analytical or offline workloads. To use batch processing, several endpoints are available:
| Action | Method | Endpoint |
| --- | --- | --- |
| Create a message batch | POST | `https://api.aimlapi.com/v1/batches` |
| Get status or results of a specific message batch | GET | `https://api.aimlapi.com/v1/batches?batch_id={batch_id}` |
| Cancel a specific message batch | POST | `https://api.aimlapi.com/v1/batches/cancel/{batch_id}` |
{% hint style="success" %} You can find the list of supported models in the first API schema below — see the allowed values under `requests` > `params` > `model`. {% endhint %} *** ## Create a message batch ## Create a message batch > Create a batch of messages for asynchronous processing. All usage is charged at 50% of the standard API prices. ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Llm.v2.CreateBatchDTO":{"type":"object","properties":{"requests":{"type":"array","items":{"type":"object","properties":{"custom_id":{"type":"string"},"params":{"type":"object","properties":{"model":{"type":"string","enum":["claude-opus-4-5-20251101","claude-opus-4-1-20250805","claude-opus-4-20250514","claude-sonnet-4-5-20250929","claude-sonnet-4-20250514","claude-3-7-sonnet-20250219","claude-3-5-haiku-20241022","claude-3-haiku-20240307"]},"max_tokens":{"type":"number","minimum":1,"default":1024},"messages":{"type":"array","items":{"nullable":true}},"metadata":{"type":"object","additionalProperties":{"type":"string"}},"stop_sequences":{"type":"array","items":{"type":"string"}},"system":{"type":"string"},"temperature":{"type":"number","minimum":0,"maximum":1,"default":1},"tool_choice":{"nullable":true},"tools":{"type":"array","items":{"nullable":true}},"top_k":{"type":"number"},"top_p":{"type":"number"},"thinking":{"type":"object","properties":{"budget_tokens":{"type":"integer","minimum":1024},"type":{"type":"string","enum":["enabled"]}},"required":["budget_tokens","type"]}},"required":["model","messages"]}},"required":["custom_id","params"]},"minItems":1,"maxItems":100000}},"required":["requests"]}}},"paths":{"/v1/batches":{"post":{"operationId":"ChatBatchesController_createBatch_v1","summary":"Create a message batch","description":"Create a batch of messages for asynchronous processing. All usage is charged at 50% of the standard API prices.","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"$ref":"#/components/schemas/Llm.v2.CreateBatchDTO"}}}},"responses":{"201":{"description":""}},"tags":["Chat Completions"]}}}} ```
Code Example (Python)
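The snippet below is a minimal sketch of creating a batch. It submits three simple chat requests to `claude-3-5-haiku-20241022`, mirroring the three results shown in the sample responses further down; the `custom_id` values and prompt texts are illustrative assumptions, so adjust them to your own workload.

{% code overflow="wrap" %}
```python
import requests
import json

# Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
API_KEY = "<YOUR_AIMLAPI_KEY>"
BASE_URL = "https://api.aimlapi.com/v1"

headers = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json"
}

# Each request needs a custom_id (any label you choose) and a params object
# that follows the schema above (model and messages are required).
payload = {
    "requests": [
        {
            "custom_id": "test-01",
            "params": {
                "model": "claude-3-5-haiku-20241022",
                "max_tokens": 1024,
                "messages": [{"role": "user", "content": "How do I learn NestJS?"}],
            },
        },
        {
            "custom_id": "test-02",
            "params": {
                "model": "claude-3-5-haiku-20241022",
                "max_tokens": 1024,
                "messages": [{"role": "user", "content": "How do I learn React.js?"}],
            },
        },
        {
            "custom_id": "test-03",
            "params": {
                "model": "claude-3-5-haiku-20241022",
                "max_tokens": 1024,
                "messages": [{"role": "user", "content": "How do I learn Next.js?"}],
            },
        },
    ]
}

response = requests.post(f"{BASE_URL}/batches", headers=headers, json=payload)
response.raise_for_status()
print("Batch created:")
print(json.dumps(response.json(), indent=2))
```
{% endcode %}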
Response ```json5 Batch created: { "id": "msgbatch_01AbYVLPKqi8HuSe6sFJV7ZP", "type": "message_batch", "processing_status": "in_progress", "request_counts": { "processing": 3, "succeeded": 0, "errored": 0, "canceled": 0, "expired": 0 }, "ended_at": null, "created_at": "2025-10-24T13:16:06.070587+00:00", "expires_at": "2025-10-25T13:16:06.070587+00:00", "cancel_initiated_at": null, "results_url": null } ```
## Get status or results of a specific message batch ## Get batch status or results > Get batch status if in progress, or stream results if completed in JSONL format. ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}}},"paths":{"/v1/batches":{"get":{"operationId":"ChatBatchesController_iterateBatchResults_v1","summary":"Get batch status or results","description":"Get batch status if in progress, or stream results if completed in JSONL format.","parameters":[{"name":"batch_id","required":true,"in":"query","description":"The ID of the batch to get results for","schema":{"type":"string"}}],"responses":{"200":{"description":""}},"tags":["Chat Completions"]}}}} ```
Code Example (Python) {% code overflow="wrap" %} ```python import requests import json # Insert your AIML API Key instead of API_KEY = "" BASE_URL = "https://api.aimlapi.com/v1" # Insert your batch_id here batch_id = "msgbatch_01TDVirzmjyZ51WZGyU3uMeY" headers = { "Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json" } response = requests.get(f"{BASE_URL}/batches?batch_id={batch_id}", headers=headers) print("Raw response:\n", response.text[:500]) try: data = [json.loads(line) for line in response.text.splitlines() if line.strip()] print("\n✅ Parsed JSONL:") print(json.dumps(data, indent=2)) except json.JSONDecodeError: try: data = response.json() print("\n✅ Parsed JSON:") print(json.dumps(data, indent=2)) except Exception as e: print("\n⚠️ Could not parse response:", e) ``` {% endcode %}
Response ````json5 Raw response: {"custom_id":"test-01","result":{"type":"succeeded","message":{"model":"claude-3-5-haiku-20241022","id":"msg_01XQUp3SKD1iGNcppVbxSUgE","type":"message","role":"assistant","content":[{"type":"text","text":"To learn NestJS effectively, follow these steps:\n\n1. Prerequisites\n```bash\n- Basic JavaScript/TypeScript knowledge\n- Node.js installed\n- npm (Node Package Manager)\n```\n\n2. Basic Setup\n```bash\n# Install NestJS CLI globally\nnpm i -g @nestjs/cli\n\n# Create a new project\nnest new proj ✅ Parsed JSONL: [ { "custom_id": "test-01", "result": { "type": "succeeded", "message": { "model": "claude-3-5-haiku-20241022", "id": "msg_01XQUp3SKD1iGNcppVbxSUgE", "type": "message", "role": "assistant", "content": [ { "type": "text", "text": "To learn NestJS effectively, follow these steps:\n\n1. Prerequisites\n```bash\n- Basic JavaScript/TypeScript knowledge\n- Node.js installed\n- npm (Node Package Manager)\n```\n\n2. Basic Setup\n```bash\n# Install NestJS CLI globally\nnpm i -g @nestjs/cli\n\n# Create a new project\nnest new project-name\n\n# Navigate to project directory\ncd project-name\n\n# Run the application\nnpm run start\n```\n\n3. Core Concepts to Learn\n\na) Modules\n```typescript\n@Module({\n controllers: [],\n providers: [],\n imports: []\n})\nexport class AppModule {}\n```\n\nb) Controllers\n```typescript\n@Controller('users')\nexport class UsersController {\n @Get()\n findAll() {\n return 'List of users';\n }\n\n @Post()\n create(@Body() createUserDto: CreateUserDto) {\n return 'Create user';\n }\n}\n```\n\nc) Services\n```typescript\n@Injectable()\nexport class UsersService {\n findAll() {\n return ['user1', 'user2'];\n }\n\n create(user) {\n // Create user logic\n }\n}\n```\n\nd) Dependency Injection\n```typescript\n@Controller('users')\nexport class UsersController {\n constructor(private usersService: UsersService) {}\n}\n```\n\n4. Learn Key Decorators\n```typescript\n// Common decorators\n@Module()\n@Controller()\n@Injectable()\n@Get()\n@Post()\n@Put()\n@Delete()\n@Param()\n@Body()\n@Query()\n```\n\n5. Understanding Middleware\n```typescript\n@Injectable()\nexport class LoggerMiddleware implements NestMiddleware {\n use(req: Request, res: Response, next: NextFunction) {\n console.log('Request...');\n next();\n }\n}\n```\n\n6. Validation\n```bash\n# Install class-validator\nnpm i class-validator class-transformer\n```\n\n```typescript\nexport class CreateUserDto {\n @IsNotEmpty()\n @IsString()\n name: string;\n\n @IsEmail()\n email: string;\n}\n```\n\n7. Database Integration\n```bash\n# For TypeORM\nnpm i @nestjs/typeorm typeorm postgres\n```\n\n8. Authentication\n```bash\n# Install passport\nnpm i @nestjs/passport passport passport-local\n```\n\n9. Learning Resources\n- Official NestJS Documentation\n- YouTube Tutorials\n- Udemy Courses\n- GitHub Example Projects\n\n10. Practice Projects\n- REST API\n- Authentication System\n- CRUD Application\n- Real-time Chat Application\n\n11. Advanced Topics\n- Microservices\n- GraphQL\n- WebSockets\n- Caching\n- Task Scheduling\n\n12. Best Practices\n- Use DTOs\n- Implement proper error handling\n- Use dependency injection\n- Follow SOLID principles\n- Write unit and integration tests\n\n13. Recommended Learning Path\na) Learn TypeScript fundamentals\nb) Understand NestJS core concepts\nc) Build simple REST API\nd) Add authentication\ne) Integrate database\nf) Implement more complex features\n\n14. 
Sample Project Structure\n```\nsrc/\n\u251c\u2500\u2500 users/\n\u2502 \u251c\u2500\u2500 dto/\n\u2502 \u251c\u2500\u2500 entities/\n\u2502 \u251c\u2500\u2500 users.controller.ts\n\u2502 \u251c\u2500\u2500 users.service.ts\n\u2502 \u2514\u2500\u2500 users.module.ts\n\u251c\u2500\u2500 app.module.ts\n\u2514\u2500\u2500 main.ts\n```\n\n15. Additional Tools\n- Swagger for API documentation\n- Jest for testing\n- Docker for containerization\n\nCode Example (Complete User Module):\n```typescript\n// user.dto.ts\nexport class CreateUserDto {\n @IsNotEmpty()\n username: string;\n\n @IsEmail()\n email: string;\n}\n\n// user.entity.ts\n@Entity()\nexport class User {\n @PrimaryGeneratedColumn()\n id: number;\n\n @Column()\n username: string;\n\n @Column()\n email: string;\n}\n\n// user.service.ts\n@Injectable()\nexport class UserService {\n constructor(\n @InjectRepository(User)\n private userRepository: Repository\n ) {}\n\n async create(createUserDto: CreateUserDto) {\n const user = this.userRepository.create(createUser" } ], "stop_reason": "max_tokens", "stop_sequence": null, "usage": { "input_tokens": 13, "cache_creation_input_tokens": 0, "cache_read_input_tokens": 0, "cache_creation": { "ephemeral_5m_input_tokens": 0, "ephemeral_1h_input_tokens": 0 }, "output_tokens": 1024, "service_tier": "batch" } } } }, { "custom_id": "test-02", "result": { "type": "succeeded", "message": { "model": "claude-3-5-haiku-20241022", "id": "msg_01SK4vLuzho25MPU3WKMy6B5", "type": "message", "role": "assistant", "content": [ { "type": "text", "text": "Here's a comprehensive guide to learning React.js:\n\n1. Prerequisites\n- HTML, CSS, JavaScript fundamentals\n- ES6+ JavaScript features\n- Basic understanding of web development\n\n2. Learning Path\na) Official Documentation\n- React official docs (reactjs.org)\n- Very comprehensive and well-structured\n\nb) Online Learning Resources\n- freeCodeCamp\n- Codecademy\n- Udemy courses\n- YouTube tutorials\n- Coursera\n- Scrimba React course\n\nc) Key Learning Steps\n1. Basic React concepts\n- Components\n- JSX\n- Props\n- State\n- Hooks\n- Lifecycle methods\n\n2. State management\n- useState\n- useReducer\n- Context API\n- Redux\n\n3. Routing\n- React Router\n- Navigation between pages\n\n4. Advanced concepts\n- Custom hooks\n- Performance optimization\n- Code splitting\n- Server-side rendering\n\n5. Practical Projects\n- Todo list\n- Weather app\n- E-commerce platform\n- Social media clone\n\n3. Learning Strategies\n- Hands-on coding\n- Build real projects\n- Join developer communities\n- Practice consistently\n- Follow best practices\n- Read documentation\n\n4. Recommended Learning Resources\n- Official React Documentation\n- React.js GitHub repository\n- YouTube channels\n- Coding bootcamps\n- Stack Overflow\n- GitHub projects\n\n5. Tools & Libraries\n- Create React App\n- Next.js\n- TypeScript\n- Styled-components\n- Material-UI\n- Chakra UI\n\n6. Practice Platforms\n- CodePen\n- CodeSandbox\n- GitHub\n- Personal projects\n\n7. Learning Timeline\n- Basics: 2-4 weeks\n- Intermediate: 2-3 months\n- Advanced: 6-12 months\n\nPro Tips:\n- Start small\n- Be patient\n- Code regularly\n- Learn from mistakes\n- Experiment\n- Join developer communities\n\nSample Basic React Component:\n```javascript\nimport React, { useState } from 'react';\n\nfunction Counter() {\n const [count, setCount] = useState(0);\n\n return (\n
\n

Count: {count}

\n \n
\n );\n}\n\nexport default Counter;\n```\n\n8. Recommended Learning Path\nWeek 1-2: Basics\n- Components\n- JSX\n- Props\n- State basics\n\nWeek 3-4: Hooks\n- useState\n- useEffect\n- useContext\n- Custom hooks\n\nWeek 5-6: Advanced Concepts\n- Routing\n- State management\n- API integration\n\nWeek 7-8: Projects\n- Build multiple small projects\n- Practice implementations\n\n9. Additional Skills\n- TypeScript\n- GraphQL\n- Testing (Jest)\n- Next.js\n- Styling libraries\n\n10. Job Preparation\n- Build portfolio\n- Contribute to open-source\n- Practice interview questions\n- Learn related technologies\n\nRemember: Consistent practice and building projects are key to mastering React.js!" } ], "stop_reason": "end_turn", "stop_sequence": null, "usage": { "input_tokens": 14, "cache_creation_input_tokens": 0, "cache_read_input_tokens": 0, "cache_creation": { "ephemeral_5m_input_tokens": 0, "ephemeral_1h_input_tokens": 0 }, "output_tokens": 703, "service_tier": "batch" } } } }, { "custom_id": "test-03", "result": { "type": "succeeded", "message": { "model": "claude-3-5-haiku-20241022", "id": "msg_014zDUfCJKqas9HT4Zg5REH6", "type": "message", "role": "assistant", "content": [ { "type": "text", "text": "Learning Next.js can be done through various resources and approaches. Here's a comprehensive guide to help you learn Next.js:\n\n1. Prerequisites\n- Basic understanding of React\n- JavaScript/TypeScript knowledge\n- HTML and CSS fundamentals\n\n2. Official Documentation\n- Start with the official Next.js documentation\n- Website: https://nextjs.org/docs\n- Read through the comprehensive guide and tutorials\n\n3. Online Courses\n- Udemy courses\n- Coursera\n- Pluralsight\n- YouTube tutorials\n- Traversy Media\n- Net Ninja\n- Web Dev Simplified\n\n4. Free Learning Resources\n- Next.js official tutorial\n- YouTube tutorials\n- FreeCodeCamp\n- Vercel's learning platform\n\n5. Learning Path\na. Understand React basics\nb. Learn Next.js core concepts:\n- Routing\n- Pages\n- API routes\n- Server-side rendering\n- Static site generation\n- Dynamic routing\n- Image optimization\n- File-based routing\n\n6. Practical Projects\n- Build small projects\n- Personal portfolio\n- Blog website\n- E-commerce platform\n- Dashboard application\n\n7. Key Topics to Learn\n- React components\n- Pages and layouts\n- Routing\n- API routes\n- Server-side rendering\n- Static site generation\n- Dynamic imports\n- Authentication\n- State management\n- Styling (CSS modules, Tailwind)\n\n8. Practice Platforms\n- CodeSandbox\n- StackBlitz\n- GitHub repositories\n- Personal projects\n\n9. Advanced Concepts\n- TypeScript integration\n- Performance optimization\n- Middleware\n- Authentication strategies\n- State management\n- Testing\n\n10. Recommended Learning Resources\n- Official documentation\n- Next.js GitHub repository\n- Stack Overflow\n- Reddit communities\n- Discord channels\n\n11. Practice Projects Progression\na. Beginner level\n- Simple static website\n- Personal blog\n- Todo application\n\nb. Intermediate level\n- E-commerce platform\n- Social media clone\n- Dashboard application\n\nc. Advanced level\n- Full-stack application\n- Real-time collaborative tools\n- Complex web applications\n\n12. Tools and Libraries\n- TypeScript\n- Tailwind CSS\n- Redux/Zustand\n- Prisma\n- tRPC\n- Chakra UI\n- React Query\n\n13. Deployment Platforms\n- Vercel\n- Netlify\n- Heroku\n- DigitalOcean\n- AWS\n\n14. 
Learning Strategy\n- Consistent practice\n- Build projects\n- Read documentation\n- Follow tutorials\n- Engage in community\n- Solve coding challenges\n\n15. Additional Tips\n- Follow Next.js creators on Twitter\n- Join Discord communities\n- Attend webinars\n- Contribute to open-source projects\n- Read tech blogs\n\n16. Recommended Books\n- Next.js Quick Start Guide\n- Full-stack React with Next.js\n- Professional Next.js\n\n17. YouTube Channels\n- Vercel\n- Web Dev Simplified\n- Traversy Media\n- Net Ninja\n- Jack Herrington\n\nSample Learning Timeline:\n- Week 1-2: React fundamentals\n- Week 3-4: Next.js basics\n- Week 5-6: Advanced concepts\n- Week 7-8: Project implementation\n\nCode Example (Basic Next.js Page):\n```javascript\n// pages/index.js\nfunction HomePage() {\n return
Welcome to Next.js!
\n}\n\nexport default HomePage\n```\n\nSample Routing:\n```javascript\n// pages/about.js\nfunction AboutPage() {\n return

About Us

\n}\n\nexport default AboutPage\n```\n\nRecommended Learning Sequence:\n1. React fundamentals\n2. Next.js core concepts\n3. Routing\n4. Server-side rendering\n5. API routes\n6. State management\n7. Authentication\n8. Deployment\n\nRecommended GitHub Repositories:\n- https://github.com/vercel/next.js\n- https://github.com/alan2207/nextjs-boilerplate\n- https://github.com/panacloud/bootcamp-2020\n\nRemember:\n- Be consistent\n- Build projects\n- Practice regularly\n- Stay updated with latest features\n\nBy following this comprehensive guide and maintaining a structured learning approach, you can become proficient in Next.js development." } ], "stop_reason": "end_turn", "stop_sequence": null, "usage": { "input_tokens": 14, "cache_creation_input_tokens": 0, "cache_read_input_tokens": 0, "cache_creation": { "ephemeral_5m_input_tokens": 0, "ephemeral_1h_input_tokens": 0 }, "output_tokens": 975, "service_tier": "batch" } } } } ] ````
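Each line of the completed batch output is a self-contained JSON object. If you only need the generated text for each request, a small helper like the sketch below can map every `custom_id` to its message text; it assumes the response has already been parsed into the `data` list, as in the code example above.

{% code overflow="wrap" %}
```python
def extract_texts(results):
    # Map each custom_id to the concatenated text blocks of its message.
    # `results` is the list of parsed JSONL objects from the results endpoint.
    texts = {}
    for item in results:
        result = item.get("result", {})
        if result.get("type") != "succeeded":
            continue  # skip errored, canceled, or expired requests
        blocks = result.get("message", {}).get("content", [])
        texts[item["custom_id"]] = "".join(
            block.get("text", "") for block in blocks if block.get("type") == "text"
        )
    return texts

# Usage with the `data` list from the previous example:
# for custom_id, text in extract_texts(data).items():
#     print(custom_id, "->", text[:80])
```
{% endcode %}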
## Cancel a specific message batch ## Cancel a message batch > Cancel a message batch that is currently in progress. Requests that have already started processing will complete, but no new requests will be processed. ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}}},"paths":{"/v1/batches/cancel/{batch_id}":{"post":{"operationId":"ChatBatchesController_cancelBatch_v1","summary":"Cancel a message batch","description":"Cancel a message batch that is currently in progress. Requests that have already started processing will complete, but no new requests will be processed.","parameters":[{"name":"batch_id","required":true,"in":"path","description":"The ID of the batch to cancel","schema":{"type":"string"}}],"responses":{"201":{"description":""}},"tags":["Chat Completions"]}}}} ```
Code Example (Python) ```python import requests import json # Insert your AIML API Key instead of API_KEY = "" BASE_URL = "https://api.aimlapi.com/v1" # Insert your batch_id here batch_id = "msgbatch_01McVJYhQd3Wiuqrac6y9PrX" headers = { "Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json" } url = f"{BASE_URL}/batches/cancel/{batch_id}" response = requests.post(url, headers=headers) if response.status_code == 200: print("Batch canceled successfully:") data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) else: data = response.json() print(f"Failed to cancel batch ({response.status_code}):") print(json.dumps(data, indent=2, ensure_ascii=False)) ```
Response #1 (successfully cancelled) ```json5 Batch canceled successfully: { "id": "msgbatch_01McVJYhQd3Wiuqrac6y9PrX", "type": "message_batch", "processing_status": "canceling", "request_counts": { "processing": 3, "succeeded": 0, "errored": 0, "canceled": 0, "expired": 0 }, "ended_at": null, "created_at": "2025-10-24T13:49:11.902215+00:00", "expires_at": "2025-10-25T13:49:11.902215+00:00", "cancel_initiated_at": "2025-10-24T13:49:27.756971+00:00", "results_url": null } ```
Response #2 (if already finished) {% code overflow="wrap" %} ```json5 Failed to cancel batch (400): { "requestId": "56277efa-58af-4db7-b45e-ebaa612b2af7", "statusCode": 400, "timestamp": "2025-10-24T13:51:23.801Z", "path": "/v1/batches/cancel/msgbatch_01McVJYhQd3Wiuqrac6y9PrX", "message": "400 {\"type\":\"error\",\"error\":{\"type\":\"invalid_request_error\",\"message\":\"Batch msgbatch_01McVJYhQd3Wiuqrac6y9PrX cannot be canceled because it has already finished processing.\"},\"request_id\":\"req_011CUS8e7LPi44CSuvwhSMsn\"}" } ``` {% endcode %}
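Putting the endpoints together: the sketch below polls the status/results endpoint for an existing `batch_id` until the batch is no longer in progress, then returns the parsed JSONL results. The 30-second interval and the in-progress detection logic are assumptions based on the responses shown above.

{% code overflow="wrap" %}
```python
import json
import time
import requests

# Insert your AIML API Key
API_KEY = ""
BASE_URL = "https://api.aimlapi.com/v1"
HEADERS = {
    "Authorization": f"Bearer {API_KEY}",
    "Content-Type": "application/json"
}

def wait_for_batch(batch_id, interval=30, timeout=3600):
    # Poll until the endpoint stops returning an in-progress status object,
    # then return the parsed JSONL results.
    start = time.time()
    while time.time() - start < timeout:
        response = requests.get(
            f"{BASE_URL}/batches", params={"batch_id": batch_id}, headers=HEADERS
        )
        response.raise_for_status()
        lines = [json.loads(line) for line in response.text.splitlines() if line.strip()]
        # While the batch is running, the endpoint returns a single status
        # object with a processing_status field instead of JSONL results.
        if len(lines) == 1 and lines[0].get("processing_status") in ("in_progress", "canceling"):
            print("Status:", lines[0]["processing_status"])
            time.sleep(interval)
            continue
        return lines
    raise TimeoutError("Batch did not finish within the timeout.")

# Usage:
# results = wait_for_batch("msgbatch_01TDVirzmjyZ51WZGyU3uMeY")
```
{% endcode %}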
--- # Source: https://docs.aimlapi.com/api-references/embedding-models/baai/bge-base-en.md # bge-base-en {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `BAAI/bge-base-en-v1.5` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview An embedding model that excels in creating high-precision linguistic representations. It's designed to generate detailed embeddings that capture the subtleties of language, facilitating advanced natural language processing tasks. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema {% openapi src="" path="/v1/embeddings" method="post" %} [bge-base-en.json](https://3927338786-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FROMd1X5PuqtikJ48n2N9%2Fuploads%2Fgit-blob-26c21156c401dc4f293df279d9bf199906d615bf%2Fbge-base-en.json?alt=media\&token=5a1c281e-f155-4bf5-857a-b4ff71d91672) {% endopenapi %} ## Code Example {% tabs %} {% tab title="Python" %}
```python
import openai

# Initialize the API client
client = openai.OpenAI(
    # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
    api_key="<YOUR_AIMLAPI_KEY>",
    base_url="https://api.aimlapi.com/v1",
)

# Define the text for which to generate an embedding
text = "Laura is a DJ."

# Request the embedding
response = client.embeddings.create(
    input=text,
    model="BAAI/bge-base-en-v1.5"
)

# Print the embedding
print(response)
```
{% endtab %} {% tab title="JS" %} ```javascript import OpenAI from "openai"; import util from "util"; // Initialize the API client const client = new OpenAI({ // Insert your AIML API Key instead of apiKey: "", baseURL: "https://api.aimlapi.com/v1", }); // Define the text for which to generate an embedding const text = "Laura is a DJ."; const response = await client.embeddings.create({ input: text, model: "BAAI/bge-base-en-v1.5", }); // Convert embedding to a regular array (not TypedArray) const pythonLikeResponse = { ...response, data: response.data.map(item => ({ ...item, embedding: Array.from(item.embedding), })), }; // Python-like print console.log( util.inspect(pythonLikeResponse, { depth: null, maxArrayLength: null, compact: true, }) ); ``` {% endtab %} {% endtabs %} This example shows how to set up an API client, send text to the embedding API, and print the response with the embedding vector. See how large a vector response the model generates from just a single short input phrase.
Response {% code overflow="wrap" %} ```json CreateEmbeddingResponse(data=[Embedding(embedding=[0.011658690869808197, -0.018578968942165375, 0.012323499657213688, 0.0073761423118412495, 0.06261733919382095, -0.0008231595857068896, 0.0031039523892104626, 0.018022214993834496, -0.05936441570520401, -0.005633850581943989, -0.025196731090545654, 0.029335038736462593, -0.05275379866361618, 0.007178363855928183, -0.012582371011376381, 0.05022329464554787, 0.012615736573934555, -0.03852837532758713, 0.04922296851873398, -0.013275686651468277, 0.02643674612045288, 0.04058190807700157, 0.0013573297765105963, 0.02061430923640728, -0.02698611654341221, -0.020850922912359238, -0.011911595240235329, 0.021286388859152794, -0.03574862331151962, 0.015528105199337006, 0.07527229189872742, -0.06804201006889343, 0.040051039308309555, -0.08631309121847153, 0.011029606685042381, 0.0038828470278531313, 0.055851999670267105, -0.0026011005975306034, -0.01863997057080269, -0.02572443336248398, -0.08648759871721268, -0.014346427284181118, 0.021596228703856468, 0.005607986822724342, -0.0599936842918396, -0.003305061487480998, -0.015417912974953651, 0.051326293498277664, 0.02928423322737217, -0.026041774079203606, -0.04577408730983734, 0.023901036009192467, 0.0357603095471859, 0.05475606769323349, -0.012751741334795952, 0.03616871312260628, -0.004525776486843824, -0.022031307220458984, -0.03058265522122383, -0.01109786331653595, -0.005847369786351919, -0.01874837465584278, 0.003786420449614525, -0.04965910315513611, 0.060039080679416656, 0.02385510504245758, 0.01522880233824253, 0.03449542075395584, -0.004220642149448395, -0.03513180837035179, -0.03691013529896736, 0.007439743261784315, 0.006748793181031942, -0.02299191616475582, -0.06698129326105118, -0.012593718245625496, 0.058502670377492905, -0.025966312736272812, 0.016721542924642563, 0.0600757971405983, 0.034065745770931244, -0.04764077812433243, 0.016392020508646965, 0.06289635598659515, 0.010011296719312668, -0.03453841432929039, -0.037166569381952286, -0.006588813848793507, -0.05419961363077164, 0.03145653009414673, -0.06678452342748642, -0.05227635055780411, 0.04169771447777748, 0.05716203898191452, 0.004991025198251009, 0.029354732483625412, 0.0354110449552536, -0.041608862578868866, 0.0342419408261776, 0.012666147202253342, -0.02381219156086445, -0.0198355782777071, 0.016474245116114616, -0.04728461429476738, -0.059621986001729965, -0.019043199717998505, -0.020126933231949806, 0.018849564716219902, -0.01700529083609581, 0.005782770924270153, 0.013347440399229527, 0.008009222336113453, -0.012010510079562664, -0.006792867556214333, -0.0012243515811860561, 0.02190832421183586, -0.026272114366292953, -0.01806635782122612, 0.021971991285681725, -0.03756976127624512, 0.046954382210969925, 0.024942057207226753, -0.013467652723193169, 0.05566409230232239, -0.02939034253358841, -0.004486191086471081, -0.0420663096010685, 0.06319195032119751, 0.003994333557784557, -0.015539233572781086, 0.045254092663526535, 0.028824059292674065, -0.005584255326539278, 0.011505923233926296, 0.015092678368091583, 0.019955521449446678, 0.02006419748067856, 0.0337817519903183, 0.02504267357289791, 0.013282686471939087, 0.01852530986070633, 0.0019804458133876324, -0.010732145980000496, -0.013852943666279316, -0.0055500902235507965, 0.021653370931744576, -0.00998683925718069, -0.03388819098472595, 0.004470963031053543, 0.007828164845705032, -0.0068223122507333755, 0.0488409623503685, 0.00487419031560421, -0.08021607995033264, 0.04309513419866562, 0.0019645097199827433, 
-0.009185535833239555, -0.0013155249180272222, -0.04033482447266579, 0.03934170678257942, -0.04436792433261871, 0.013499065302312374, 0.007129736710339785, 0.01721024140715599, 0.03379036486148834, 0.026883987709879875, -0.027901146560907364, 0.07364347577095032, -0.023422986268997192, -0.0019132699817419052, -0.03887537121772766, 0.015111638233065605, -0.004181549418717623, -0.004101315513253212, -0.10936370491981506, 0.03308088332414627, 0.09255659580230713, 0.003848041407763958, 0.01188590470701456, 0.01495142001658678, -0.04235406219959259, 0.0327109768986702, -0.011958586983382702, 0.008720827288925648, -0.0024145329371094704, 0.0008358414052054286, -0.003333167638629675, 0.010237077251076698, 0.0024563176557421684, 0.003806204767897725, -0.04134238883852959, -0.03535868972539902, -0.02361328713595867, 0.003962222021073103, 0.011405540630221367, -0.01352467481046915, -0.008801989257335663, 0.08732294291257858, 0.03357969969511032, 0.017288845032453537, 0.004777946975082159, 0.04029672220349312, -0.00314644118770957, -0.036461833864450455, -0.02734217420220375, 0.013465276919305325, 0.018797509372234344, -0.02859959751367569, -0.007447289768606424, 0.014241132885217667, -0.0006345664151012897, -0.03280339762568474, 0.08878018707036972, 0.01593293435871601, 0.027651024982333183, -0.025776559486985207, -0.01438236702233553, -0.009261127561330795, -0.04037085548043251, 0.0014654992846772075, 0.02597060799598694, 0.03997795656323433, -0.024606656283140182, 0.01760828122496605, -0.011711525730788708, 0.07737505435943604, 0.02062726393342018, -0.020670557394623756, -0.017537551000714302, 0.02612995356321335, -0.007961840368807316, -0.012151774950325489, -0.003072962397709489, -0.005992882885038853, 0.08971364796161652, 0.040506526827812195, -0.002490011043846607, -0.009759923443198204, -0.003256001975387335, -0.047061026096343994, -0.025094516575336456, 0.027257798239588737, -0.010738528333604336, 0.08344828337430954, -0.05279938876628876, -0.01345988642424345, -0.045219000428915024, -0.023960597813129425, 0.01320223044604063, -0.016804419457912445, 0.008912402205169201, -0.0123392753303051, -0.017277240753173828, 0.05045454204082489, -0.03717101365327835, -0.0550372377038002, -0.007294025272130966, -0.00808947067707777, 0.0436876006424427, 0.029658302664756775, 0.024017637595534325, -0.002700371900573373, 0.0232936330139637, -0.01014023832976818, -0.037643108516931534, -0.08409059047698975, -0.028104886412620544, -0.03111935406923294, 0.04546510800719261, -0.006859905086457729, 0.010472486726939678, 0.07629445195198059, 0.029637569561600685, -0.005264559295028448, -0.018116015940904617, 0.03656730428338051, -0.00324469362385571, 0.01844063214957714, 0.025931894779205322, -0.06714964658021927, 0.0015349012101069093, 0.020911550149321556, -0.016625486314296722, -0.05212928354740143, -0.02442360483109951, -0.03819900006055832, 0.0039304024539887905, 0.010794945061206818, -0.026110151782631874, 0.03273048624396324, -0.0398159995675087, 0.022590041160583496, 0.008729050867259502, -0.01432312373071909, 0.04990584775805473, -0.00878953281790018, 0.014817493036389351, 0.003577686147764325, -0.048889484256505966, 0.027168627828359604, -0.032785143703222275, 0.001428293762728572, -0.012991726398468018, 0.002207501558586955, -0.031595587730407715, -0.04929297789931297, 0.07943408191204071, -0.030240967869758606, -0.28902173042297363, 0.02503572776913643, -0.01670674793422222, -0.04970690980553627, 0.010199888609349728, -0.054794687777757645, 0.01522759348154068, -0.050625383853912354, 
-0.04223717749118805, 0.017970046028494835, -0.0337534174323082, 0.032258693128824234, -0.038969989866018295, 0.03947341442108154, 0.06362343579530716, 0.01295317243784666, -0.03605831041932106, -0.012009094469249249, -0.011229542084038258, 0.04157596826553345, -0.023201780393719673, -0.02855140157043934, -0.019053790718317032, -0.046474337577819824, 0.036980241537094116, 0.01553323119878769, 0.006598281674087048, -0.05002759024500847, -0.04619036242365837, 0.001307979109697044, 0.016306757926940918, -0.014870641753077507, -0.0583050437271595, -0.027423717081546783, 0.028048831969499588, 0.00561688793823123, -0.013109457679092884, -0.060974009335041046, 0.04908984526991844, -0.031463250517845154, -0.029546959325671196, -0.042048774659633636, -0.007008932530879974, 0.005690962076187134, 0.031728338450193405, -0.009632786735892296, -0.027723178267478943, -0.03559422865509987, 0.016514359042048454, 0.07822524756193161, -0.027891991659998894, -0.013297860510647297, 0.025567054748535156, -0.03333411365747452, -0.00414719432592392, 0.02465416118502617, 0.009624198079109192, -0.028551414608955383, -0.01517847552895546, -0.03812381625175476, 0.02285400591790676, -0.021881822496652603, 0.0007843165658414364, -0.02498115599155426, -0.013946430757641792, -0.00893888808786869, 0.011626829393208027, -0.06035604700446129, 0.05703093111515045, 0.035672836005687714, 0.03164256364107132, -0.07518582046031952, 0.003520556027069688, -0.07736840844154358, -0.039008162915706635, -0.0074194250628352165, -0.04513378441333771, -0.030272195115685463, 0.034462280571460724, -0.028183691203594208, 0.02213544026017189, 0.025428684428334236, 0.02164965309202671, -0.005362908821552992, 0.016267089173197746, -0.04896216094493866, 0.056062184274196625, 0.01068164687603712, -0.05329759046435356, 0.012132675386965275, 0.01482552569359541, 0.02420257031917572, -0.014463119208812714, 0.03615114465355873, -0.028405772522091866, 0.023333707824349403, 0.002132097724825144, -0.0029442200902849436, 0.023667745292186737, 0.017944909632205963, 0.04902446269989014, -0.032475341111421585, 0.01893465593457222, 0.008250508457422256, 0.025931477546691895, -0.05746575817465782, -0.06726448237895966, -0.04720583185553551, -0.023415101692080498, -0.03320658579468727, 0.007519441191107035, -0.011187655851244926, 0.06192195042967796, -0.07022793591022491, 0.01893569901585579, -0.07046280056238174, 0.0304004717618227, 0.024508673697710037, -0.017410725355148315, -0.024023018777370453, 0.02454483136534691, 0.012349991127848625, -0.04359012097120285, 0.014789453707635403, -0.048688847571611404, 0.042428791522979736, 0.011397453024983406, 0.0031104108784347773, -0.037755489349365234, 0.032866220921278, -0.039232075214385986, -0.03735247254371643, 0.012811037711799145, -0.052589088678359985, 0.032366082072257996, 0.03369993343949318, -0.014085110276937485, -0.06036509573459625, 0.04668035730719566, 0.005542110651731491, -0.020530635491013527, -1.3337402378965635e-05, -0.0036837924271821976, 0.00015563184570055455, 0.06424855440855026, 0.0019413833506405354, 0.008460842072963715, -0.0013321915175765753, -0.021368030458688736, 0.0005175513215363026, 0.05508086830377579, 0.003703558351844549, 0.016298184171319008, -0.05562125891447067, 0.04376213252544403, 0.0007460187771357596, 0.044157881289720535, -0.01038070023059845, -0.003666197182610631, -0.022915473207831383, -0.015332457609474659, -0.03458123281598091, -0.05164092034101486, 0.006279381923377514, 0.00730145163834095, 0.06235000863671303, -0.0054173278622329235, 0.03942393139004707, 
-0.019114740192890167, -0.012854578904807568, -0.04528335854411125, -0.028437446802854538, 0.05823083594441414, 0.04027895629405975, 0.015850722789764404, 0.02131357043981552, 0.0014371995348483324, 0.03518809750676155, 0.051746103912591934, 0.053245820105075836, -0.028711413964629173, -0.014230514876544476, -0.047211743891239166, 0.01689412072300911, 0.04366833344101906, 0.01632235012948513, -0.05013277009129524, -0.02260572463274002, 0.046214453876018524, 0.013100503012537956, 0.009009084664285183, 0.04649597406387329, -0.02283347025513649, 0.007127475459128618, -0.04864220693707466, -0.039959296584129333, 0.011588836088776588, 0.07114911824464798, -0.051676250994205475, -0.02821783907711506, -0.03221212700009346, 0.010966045781970024, 0.003503242041915655, 0.04182348772883415, 0.04281967505812645, -0.0666528195142746, 0.05860932171344757, 0.008202419616281986, -0.0290206428617239, 0.036701276898384094, -0.031827524304389954, -0.016598979011178017, 0.05001441761851311, -0.005293721798807383, -0.023273391649127007, -0.021332601085305214, 0.012163229286670685, 0.03855433687567711, 0.03917527571320534, -0.000506638316437602, 0.010299109853804111, -0.026933452114462852, -0.019489889964461327, 0.00023776893795002252, -0.000930268841329962, 0.020109230652451515, -0.06015421822667122, 0.0388009212911129, 0.0036858306266367435, -0.0614473782479763, 0.03543441742658615, -0.0546044297516346, -0.0002424120029900223, -0.00031788364867679775, -0.02662869170308113, 0.016553305089473724, -0.044441912323236465, 0.03416808322072029, 0.02251741848886013, 0.025819605216383934, 0.0725560337305069, -0.03633907064795494, 1.8129039744962938e-05, -0.0005926945596002042, 0.01997777633368969, 0.06660829484462738, -0.03621210530400276, -0.019180625677108765, -0.0015497016720473766, -0.04997091367840767, 0.006882124580442905, 0.03433854505419731, 0.0044527300633490086, -0.005119694862514734, 0.04012249410152435, -0.048189420253038406, -0.02032427489757538, 0.003958383109420538, -0.0470176562666893, 0.006208898965269327, 0.03419186547398567, -0.0039625572971999645, -0.0211471077054739, 0.017989803105592728, 0.05225418135523796, 0.021960897371172905, -0.0090477941557765, -0.02973831631243229, 0.006917226128280163, -0.02460668236017227, 0.019219186156988144, -0.019327064976096153, -0.028160177171230316, -0.040624815970659256, 0.018049148842692375, -0.007872330024838448, 0.0013795916456729174, -0.05496736988425255, 0.02021488919854164, -0.07206983119249344, -0.006986201275140047, 0.011781449429690838, 2.6940986572299153e-05, -0.009080830030143261, 0.014979347586631775, -0.021085567772388458, 0.01631174236536026, -0.011692334897816181, -0.020261332392692566, -0.0005780161009170115, 0.027564585208892822, 0.03675185516476631, 0.04981456696987152, 0.038751695305109024, -0.001692216726951301, 0.0030602680053561926, 0.05946047604084015, 0.03279809281229973, 0.018662715330719948, 0.0010382626205682755, -0.03739776462316513, 0.058492474257946014, -0.004906819202005863, 0.011189996264874935, -0.042336802929639816, -0.03350737690925598, 0.05382155254483223, -0.11734187602996826, 0.010130063630640507, 0.03273265063762665, -0.03998766466975212, -0.024963650852441788, 0.0777893140912056, 0.0044626276940107346, -0.03591876104474068, -0.004831091966480017, 0.04886357858777046, -0.014175692573189735, 0.009981085546314716, 0.0513734370470047, -0.010054119862616062, -0.06900234520435333, 0.03559659793972969, 0.03199708089232445, 0.0762697234749794, 0.020331457257270813, 0.016148889437317848, 0.022265996783971786, 0.01014411635696888, 
0.05869985744357109, 0.06161825358867645, 0.038302212953567505, 0.055247604846954346, 0.0547727569937706, -0.03966425731778145, -0.04922478646039963, -0.008416063152253628, 0.032850269228219986, -0.0010125684784725308, -0.02813360095024109, -0.02904607728123665, 0.045110028237104416, 0.02027292549610138, 0.05021657794713974, 0.06494481861591339, -0.04767780005931854, 0.02763231098651886, 0.009192290715873241, 0.000659125973470509, -0.04054149240255356, 0.003907663282006979, 0.033963315188884735, 0.008144555613398552, -0.08116139471530914, 0.07988631725311279, 0.017002783715724945, 0.014132574200630188, 0.01550651341676712, -0.023972097784280777, 0.023776691406965256, -0.044400569051504135, 0.001965914387255907, 0.08696596324443817, 0.019283894449472427, -0.04227995499968529, -0.08769670128822327, -0.002612076234072447, 0.03166748955845833, 0.025596309453248978, 0.002526302821934223, 0.03700852394104004, 0.010600398294627666, -0.052719831466674805, 0.010206802748143673, -0.01595372147858143, -0.043334148824214935, -0.027373841032385826, -0.040282879024744034, 0.0038889567367732525, 0.06101490557193756, 0.017439477145671844, -0.03916575014591217, -0.036308906972408295, -0.016030609607696533, -0.04583678022027016, -0.045579031109809875, -0.014705915004014969, 0.024154217913746834, -0.04914421588182449, 0.0008538658730685711, 0.061218466609716415, -0.01976163499057293, 0.026634687557816505, 0.03310007601976395, -0.033564817160367966, -0.03548832982778549, 0.04763418436050415, 0.0440690740942955, -0.009795410558581352, 0.04135273024439812, 0.06297280639410019, 0.0390808992087841, 0.005284945480525494, 0.0074376920238137245, -0.024793729186058044, 0.042871683835983276, -0.04783324897289276, -0.026509402319788933, -0.05406336486339569, 0.034168824553489685, -0.009172582998871803, 0.006330863572657108, -0.05980753153562546, 0.03882012143731117, 0.012579734437167645, 0.05506054684519768, 0.026283686980605125, -0.0037538025062531233, -0.0005106416065245867, -0.011596854776144028, -0.04689883440732956, 0.01959231123328209, 0.032121509313583374, 0.033550020307302475, -0.01649251952767372, 0.05118260905146599, -0.022494375705718994, -0.014754097908735275, -0.001066056895069778, 0.03409600257873535, 0.037240006029605865, -0.024016207084059715, 0.005816940683871508, -0.057898782193660736, -0.03646031394600868, -0.07734798640012741, -0.04578268527984619, 0.014451026916503906, -0.0029571594204753637, -0.0457887202501297, 0.017878886312246323, 0.008322810754179955, 0.0049087475053966045, -0.02843366749584675, 0.02956739068031311, -0.016335176303982735, -0.04846349358558655, -0.016797350719571114, -0.0617985799908638, -0.03490974381566048, -0.04078895226120949, -0.025277748703956604, 0.00685898819938302, -0.017386544495821, -0.018116246908903122, -0.02631491608917713, 0.005632593296468258, -0.017169492319226265, 0.030786830931901932, -0.023545755073428154], index=0, object='embedding')], model='BAAI/bge-base-en-v1.5', object='list', usage=Usage(prompt_tokens=7, total_tokens=7, completion_tokens=0, prompt_tokens_details=None), id='embd-643b9c23567f410daf9d1f949e8f87f9', created=1768555048, meta={'usage': {'credits_used': 1}}) ``` {% endcode %}
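Embedding vectors become useful when you compare them. The sketch below reuses the client configuration from the example above and compares two arbitrary sentences by cosine similarity; values closer to 1 indicate more semantically similar texts.

{% code overflow="wrap" %}
```python
import math

import openai

client = openai.OpenAI(
    # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
    api_key="<YOUR_AIMLAPI_KEY>",
    base_url="https://api.aimlapi.com/v1",
)

def embed(text):
    # Request an embedding vector for a single text
    response = client.embeddings.create(input=text, model="BAAI/bge-base-en-v1.5")
    return response.data[0].embedding

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

first = embed("Laura is a DJ.")
second = embed("Laura mixes music at a club.")
print("Similarity:", cosine_similarity(first, second))
```
{% endcode %}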
You can find a more advanced example of using embedding vectors in our article [Find Relevant Answers: Semantic Search with Text Embeddings](https://docs.aimlapi.com/use-cases/find-relevant-answers-semantic-search-with-text-embeddings) in the Use Cases section. --- # Source: https://docs.aimlapi.com/api-references/embedding-models/baai/bge-large-en.md # bge-large-en {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `BAAI/bge-large-en-v1.5` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview BGE-Large-EN-v1.5, standing for Bi-directional Global Embedding, is an advanced language model that provides rich, contextual embeddings for English text. It encodes deep linguistic information, allowing for a comprehensive understanding of text nuances, which is crucial for various natural language processing (NLP) tasks. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema {% openapi src="" path="/v1/embeddings" method="post" %} [bge-large-en.json](https://3927338786-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FROMd1X5PuqtikJ48n2N9%2Fuploads%2Fgit-blob-c526d7ea7645984017f09d703c4946d5040dfd9f%2Fbge-large-en.json?alt=media\&token=205c5b74-4cad-4bb0-96c8-0b04ad64f89a) {% endopenapi %} ## Code Example {% tabs %} {% tab title="Python" %}
```python
import openai

# Initialize the API client
client = openai.OpenAI(
    # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
    api_key="<YOUR_AIMLAPI_KEY>",
    base_url="https://api.aimlapi.com/v1",
)

# Define the text for which to generate an embedding
text = "Laura is a DJ."

# Request the embedding
response = client.embeddings.create(
    input=text,
    model="BAAI/bge-large-en-v1.5"
)

# Print the embedding
print(response)
```
{% endtab %} {% tab title="JS" %} ```javascript import OpenAI from "openai"; import util from "util"; // Initialize the API client const client = new OpenAI({ // Insert your AIML API Key instead of apiKey: "", baseURL: "https://api.aimlapi.com/v1", }); // Define the text for which to generate an embedding const text = "Laura is a DJ."; const response = await client.embeddings.create({ input: text, model: "BAAI/bge-large-en-v1.5", }); // Convert embedding to a regular array (not TypedArray) const pythonLikeResponse = { ...response, data: response.data.map(item => ({ ...item, embedding: Array.from(item.embedding), })), }; // Python-like print console.log( util.inspect(pythonLikeResponse, { depth: null, maxArrayLength: null, compact: true, }) ); ``` {% endtab %} {% endtabs %} This example shows how to set up an API client, send text to the embedding API, and print the response with the embedding vector. See how large a vector response the model generates from just a single short input phrase.
Response {% code overflow="wrap" %} ```json CreateEmbeddingResponse(data=[Embedding(embedding=[-0.029522722586989403, 0.062254276126623154, -0.00391138531267643, 0.013663576915860176, -0.032200440764427185, -0.02270970679819584, -0.01390056125819683, 0.03998970240354538, 0.027032114565372467, -0.0012084535555914044, 0.014594324864447117, -0.004159911535680294, 0.010213620960712433, 0.011115002445876598, -0.03424204885959625, -0.010726550593972206, -0.02737782709300518, -0.003479707520455122, -0.04980620741844177, -0.0032292637042701244, 0.0023568749893456697, 0.01143425703048706, -0.06101774424314499, -0.010085036046802998, -0.06003176420927048, 0.02827335335314274, 0.01778051070868969, -0.01880253106355667, 0.06990577280521393, 0.056985631585121155, -0.009732295759022236, -0.04632685333490372, -0.00727028539404273, -0.02271944098174572, -0.013524127192795277, 0.010008606128394604, 0.010165529325604439, -0.04391580447554588, -0.020615067332983017, -0.05555597320199013, 0.014454372227191925, 0.0008113973308354616, 0.010083837434649467, -0.022239794954657555, -0.024760758504271507, 0.015304923988878727, 0.05394122004508972, -0.0169982872903347, 0.040029123425483704, 0.011218860745429993, -0.022496478632092476, -0.0020275257993489504, -0.00981214176863432, -0.016845501959323883, -0.00925981905311346, -0.01922508515417576, -0.023760490119457245, -0.013323926366865635, 0.0334475077688694, 0.025237208232283592, -0.010350138880312443, 0.00458571407943964, 0.0046234410256147385, -0.056592997163534164, 0.02532116323709488, 0.010578532703220844, 0.03549222648143768, -0.021847160533070564, -0.02547185681760311, -0.03671136125922203, -0.014842131175100803, 0.023645522072911263, -0.09998102486133575, -0.02429225668311119, -0.011102784425020218, 0.04762694239616394, -0.036774892359972, 0.005713794380426407, 0.057492174208164215, 0.027922486886382103, 0.0004857970343437046, 0.008038084022700787, -0.009430059231817722, 0.019797751680016518, -0.02781386487185955, -0.022991491481661797, 0.008382855914533138, 0.023664213716983795, 0.01143132895231247, 0.0001652732607908547, 0.02693919837474823, 0.07790271937847137, -0.03165774047374725, -0.010548539459705353, 0.0451447032392025, 0.0028644504491239786, -0.008782245218753815, -0.008623814210295677, 0.026056664064526558, -0.028340300545096397, -0.018393591046333313, 0.009827322326600552, 0.03347485512495041, 0.030976206064224243, -0.02699423022568226, 0.030783070251345634, 0.0003346398880239576, -0.01405976340174675, -0.030830850824713707, -0.07551857084035873, -0.013110504485666752, -0.04160088673233986, 0.0647839605808258, 0.00889752246439457, 0.024090485647320747, 0.05746475234627724, 0.03667256608605385, 0.026477761566638947, -0.02702298015356064, -0.021570511162281036, 0.013353407382965088, 0.03642627224326134, 0.036220189183950424, -0.03564770519733429, 0.007043516729027033, -0.04647684469819069, -0.032963331788778305, 0.04351876303553581, -0.009451177902519703, -0.008452154695987701, -0.008001103065907955, -0.04093235358595848, 0.003499533049762249, -0.001788096153177321, 0.024703320115804672, 0.01899293065071106, -0.02957605943083763, 0.07011335343122482, 0.017282791435718536, -0.01545021403580904, 0.060163721442222595, 0.057065822184085846, 0.012513200752437115, 0.06714717298746109, -0.040733009576797485, 0.028770949691534042, -0.007422795053571463, 0.009439362213015556, -0.04462719336152077, -0.043524887412786484, -0.012206687591969967, 0.03616880625486374, 0.004707029089331627, 0.020733507350087166, 0.018511788919568062, -0.02805466763675213, 
-0.02025154046714306, -0.008759403601288795, -0.01656501181423664, 0.05147245526313782, -0.0032311915419995785, -0.0041289497166872025, -0.013029356487095356, 0.008326621726155281, 0.009214901365339756, -0.0419512540102005, 0.016409194096922874, -0.04993880167603493, 0.04907597228884697, -0.013095089234411716, 0.025698035955429077, 0.02081691101193428, -0.0093222726136446, -0.017111586406826973, 0.01779896393418312, 0.03665577247738838, 0.05176100134849548, -0.03512326255440712, 0.00909622572362423, 0.019103461876511574, -0.02553418092429638, 0.015183958224952221, 0.009511142037808895, 0.014553618617355824, 0.0383070707321167, -0.0059502520598471165, 0.02544885315001011, -0.026583123952150345, -0.01224718987941742, -0.0017936804797500372, -0.012914389371871948, 0.04561089724302292, -0.020161956548690796, 0.008469115011394024, 0.01140200812369585, 0.011601760983467102, -0.008557690307497978, 0.039230264723300934, -0.013338652439415455, -0.07635688781738281, -0.037040479481220245, 0.03697170689702034, -0.05416393652558327, 0.03272475302219391, -0.000504587369505316, -0.011258579790592194, 0.019344866275787354, 0.05000797659158707, -0.015876132994890213, -0.04546599090099335, 0.07017447799444199, 0.038655366748571396, -0.04131274297833443, -0.037695012986660004, -0.006729803513735533, 0.006960767786949873, 0.01485166884958744, 0.009521279484033585, -0.04618052393198013, 0.02816176973283291, -0.03977259248495102, -0.04827208071947098, 0.048044249415397644, 0.031553033739328384, -0.013286910019814968, -0.04846935346722603, 0.021118663251399994, -0.009148291312158108, 0.01426982693374157, 0.019476620480418205, -0.05256136134266853, 0.024507582187652588, 0.00012212405272293836, 0.05333540588617325, 0.03345203027129173, 0.032374635338783264, 0.03219005838036537, 0.017373081296682358, 0.031087752431631088, -0.06655700504779816, -0.009955832734704018, 0.03441235423088074, 0.02919805981218815, 0.07619979977607727, -0.015233660116791725, 0.02458306774497032, 0.008592192083597183, -0.026489505544304848, -0.025904254987835884, -0.002702872734516859, 9.609853441361338e-05, 0.01956511288881302, 0.04863889142870903, 0.0652344673871994, -0.01069958508014679, 0.009594440460205078, 0.04638155922293663, 0.06748662889003754, -0.012460005469620228, -0.012751922942698002, -0.004857973195612431, 0.004359367769211531, -0.041037872433662415, 0.004398624412715435, 0.013799718581140041, -0.003116463776677847, -0.01917988806962967, 0.01767786592245102, -0.03719392418861389, -0.0525314137339592, -0.055655188858509064, -0.04728839173913002, -0.0008236623834818602, -0.03137201443314552, -0.004173701163381338, -0.022142741829156876, 0.06085939332842827, 0.012680514715611935, 0.09939192235469818, 0.015797315165400505, -0.0009737874497659504, -0.033033598214387894, -0.026521705090999603, 0.002309446455910802, -0.00019024554057978094, 0.003105220617726445, -0.015737259760499, 0.03238927945494652, 0.05285561829805374, 0.07453236728906631, 0.010512602515518665, -0.02602635882794857, -0.00580034963786602, 0.021570583805441856, 0.022359885275363922, -0.016684483736753464, -0.005998498760163784, 0.04076433554291725, -0.021667497232556343, 0.019489748403429985, 0.050995513796806335, 0.004764164332300425, -0.01954377442598343, 0.008434475399553776, -0.02033155784010887, 0.031721729785203934, -0.0065946755930781364, -0.031120385974645615, -0.005834194831550121, 0.028653593733906746, -0.014560585841536522, 0.0022754573728889227, -0.005725066177546978, 0.02019825577735901, -0.0356823094189167, -0.0002555984538048506, 
0.01974361576139927, -0.008504998870193958, -0.01009999681264162, 0.04283827170729637, -0.009018105454742908, -0.043819162994623184, -0.00484999967738986, 0.007064382545650005, -0.028512146323919296, 0.01708376221358776, 0.03335615620017052, -0.07587867230176926, 0.03408438339829445, -0.022202368825674057, -0.04811780899763107, 0.00286985095590353, -0.023506687954068184, -0.01913493312895298, 0.016641363501548767, 0.042343251407146454, 0.008976076729595661, 0.05506327375769615, -0.023071784526109695, 0.007257102057337761, 0.006796443834900856, -0.04913640767335892, -0.025895319879055023, 0.062452282756567, -0.021684417501091957, 0.025841018185019493, 0.01624671369791031, -0.023005925118923187, -0.004858024884015322, -0.01075603999197483, -0.024817591533064842, -0.0015236865729093552, 0.0001679820561548695, 0.0019175097113475204, 0.014955863356590271, -0.009093687869608402, -0.007088719867169857, -0.018663134425878525, -0.00137223768979311, 0.030000464990735054, 0.016661137342453003, 0.006700166966766119, 0.02190960943698883, -0.05115588754415512, 0.005104949697852135, -0.04442116618156433, 0.024104194715619087, 0.02147318795323372, 0.039859309792518616, -0.056715816259384155, -0.0020369626581668854, -0.004267476033419371, 0.01225421205163002, 0.0037794208619743586, -0.0644742026925087, -0.037709858268499374, 0.06387151032686234, -0.04234094172716141, 0.04279838502407074, -0.046357184648513794, 0.013814585283398628, -0.08498547971248627, 0.035020358860492706, 0.01779106818139553, -0.02189771831035614, -0.03103712946176529, -0.0058637200854718685, -0.03332214429974556, 0.001490510767325759, -0.014081544242799282, 0.014885049313306808, -0.023892799392342567, 0.032012633979320526, -0.056850410997867584, 0.005471059586852789, -0.006981397978961468, 0.023171668872237206, 0.04487380012869835, -0.009530812501907349, 0.012730184011161327, 0.044686783105134964, 0.05463799089193344, 0.0327654667198658, 0.01559726893901825, -0.03904477879405022, -0.005512998905032873, -0.0019490045960992575, 0.01949027180671692, 0.009922638535499573, 0.04459402337670326, -0.010860259644687176, 0.018829522654414177, -0.013290081173181534, 0.03064231015741825, 0.056494902819395065, -0.007825024425983429, -0.026399726048111916, -0.00852801650762558, 0.000660750491078943, 0.01652698777616024, -0.04757079854607582, -0.026024600490927696, -0.04804184287786484, 0.03483298420906067, 0.010347344912588596, -0.017555957660079002, 0.032191406935453415, -0.06357131153345108, 0.026082638651132584, 0.03111284226179123, -0.05163376033306122, -0.01606161706149578, -0.009836734272539616, -0.008714954368770123, -0.00021692550217267126, 0.0007527400157414377, -0.0043586138635873795, 0.011816009879112244, -0.02483704686164856, -0.021467113867402077, 0.0629681646823883, 0.037146057933568954, -0.0324607715010643, -0.0263593140989542, 0.0266280435025692, 0.00318652531132102, 0.0218060165643692, -0.016580892726778984, -0.01223750039935112, -0.04749026522040367, -0.002432774053886533, -0.043110497295856476, 0.050126928836107254, -0.03185199573636055, -0.02295146882534027, 0.007334615103900433, -0.008642623201012611, 0.03091326914727688, -0.03624243661761284, -0.005956348497420549, 0.003453076584264636, -0.03951254487037659, 0.02511836774647236, 0.03066384233534336, -0.006540827453136444, 0.014897432178258896, 0.02094314619898796, 0.014567478559911251, -0.0009719788795337081, 0.06085598096251488, -0.04188564792275429, 0.002943075727671385, -0.00031870315433479846, -0.027615897357463837, 0.04396434873342514, 0.0012515935814008117, 
-0.0036890238989144564, -0.027488216757774353, -0.03475029021501541, 0.034413862973451614, -0.03353452309966087, 0.017529141157865524, 0.010390943847596645, -0.005581127479672432, -0.006968662142753601, -0.0466565266251564, -0.025681177154183388, 0.018218982964754105, -0.026828791946172714, 0.030627982690930367, 0.01063482090830803, 0.022942909970879555, -0.014310473576188087, 0.00865194108337164, 0.005246851127594709, -0.05128142237663269, -0.05163690075278282, 0.02413022518157959, 6.035145270288922e-05, 0.03075026534497738, 0.048841629177331924, -0.036285050213336945, -0.038019176572561264, 0.05062630772590637, -0.03209219127893448, -0.025213126093149185, -0.008215781301259995, -0.00450828718021512, -0.035153042525053024, -0.02833746187388897, -0.02248970977962017, 0.024987801909446716, 0.024240126833319664, 0.0017948400927707553, 0.04436018690466881, 0.015677493065595627, 0.00966073852032423, -0.018546484410762787, -0.052099067717790604, 0.021909814327955246, 0.06414249539375305, -0.030237816274166107, 0.029096243903040886, 0.028400307521224022, -0.039928510785102844, -0.011236540041863918, 0.010845748707652092, -0.01325505506247282, -0.02952222153544426, 0.01722525805234909, 0.017661115154623985, -0.04034676402807236, 0.0011086738668382168, -0.05350649729371071, -0.0008832564926706254, -0.00995067972689867, 0.032689113169908524, -0.043263111263513565, 0.034355636686086655, 0.017030350863933563, -0.020405858755111694, 0.03774486109614372, 0.00547265587374568, -0.05255615711212158, -0.026874952018260956, 0.04013494774699211, 0.04013209417462349, 0.025882022455334663, -0.0017426429549232125, -0.011525198817253113, 0.013356874696910381, -0.01791173405945301, 0.03535732999444008, -0.03326773643493652, -0.02770179882645607, -0.016438854858279228, 0.0021983631886541843, 0.003293291199952364, 0.017590032890439034, 0.016400860622525215, -0.017975440248847008, -0.011375151574611664, -0.012284335680305958, -0.007316038478165865, 0.002399827353656292, -0.001476738485507667, 1.0141863640455995e-05, -0.08265001326799393, 0.07546071708202362, -0.013470101170241833, -0.04300509765744209, -0.017514118924736977, 0.030039137229323387, 0.027314085513353348, -0.023579025641083717, -0.04021870717406273, -0.042904362082481384, -0.03165512904524803, -0.061574842780828476, 0.0032292574178427458, 0.028306003659963608, -0.007946253754198551, 0.008506043814122677, 0.015496401116251945, 0.029612557962536812, -0.02903464250266552, 0.035942964255809784, 0.03707672283053398, 0.021744051948189735, -0.013562401756644249, -0.06108682602643967, -0.03031783550977707, 0.0602436326444149, -0.01510953065007925, -0.03610266372561455, -0.011283733882009983, -0.020340483635663986, -0.01942325383424759, -0.008391601964831352, -0.04054586961865425, -0.017853790894150734, -0.04096417874097824, 0.05911639332771301, -0.04623124375939369, 0.025253277271986008, -0.007961930707097054, 0.012133470736443996, -0.04937832057476044, 0.025662846863269806, 0.012533769011497498, 0.004939270671457052, -0.010278631001710892, 0.008567185141146183, 0.050029102712869644, 0.003106804098933935, -0.01973148249089718, -0.045122385025024414, -0.00939270667731762, 0.016758577898144722, -0.019294684752821922, -0.037006836384534836, 0.03271787241101265, 0.05640658363699913, -0.03195963427424431, -0.002479902934283018, -0.08510592579841614, -0.012758390977978706, 0.013416224159300327, -0.033270034939050674, -0.0007973170722834766, -0.011646757833659649, -0.00488206185400486, 0.015162936411798, 0.0108066750690341, 0.004830652382224798, 0.015807058662176132, 
-0.03356131538748741, 0.02468266896903515, -0.05267836153507233, -0.0287969708442688, 0.018398184329271317, -0.005497162230312824, -0.026495173573493958, 0.02575710415840149, -0.02501780167222023, -0.011855518445372581, 0.01783899962902069, 0.016523823142051697, -0.04145701974630356, 0.05563892051577568, 0.018455134704709053, -0.004979490302503109, 0.008657179772853851, -0.014350412413477898, 0.017816869542002678, 0.013394963927567005, -0.0225545484572649, -0.017150286585092545, -0.014982698485255241, 0.0624234564602375, -0.023538727313280106, 0.023815836757421494, -0.08649793267250061, 0.01843426190316677, 0.048396456986665726, 0.025200067088007927, 0.03521302342414856, -0.012600123882293701, -0.026045482605695724, -0.022698992863297462, -0.039294224232435226, -0.034596364945173264, 0.0291140154004097, 0.01836741901934147, 0.03586648777127266, 0.008380972780287266, 0.026525501161813736, -0.03932620584964752, -0.022645922377705574, -0.026420213282108307, 0.027651730924844742, 0.03154630586504936, 0.012872753664851189, -0.05836648494005203, -0.038700442761182785, 0.005566748324781656, 0.02770235948264599, -0.07739680260419846, -0.021013114601373672, 0.008849709294736385, -0.0635802373290062, -0.037524934858083725, 0.053896594792604446, 0.007693981286138296, -0.01462036557495594, 0.04210260510444641, 0.0010265754535794258, -0.03652897849678993, 0.04025201126933098, -0.04991445690393448, 0.05417405441403389, 0.04369377717375755, -0.013219840824604034, 0.04467105120420456, 0.0016731761861592531, -0.04879438504576683, -0.003192048752680421, -0.05342375487089157, -0.010908855125308037, 0.021364599466323853, -0.04555287957191467, -0.005906135309487581, 0.0074748508632183075, -0.03120272234082222, 0.013221979141235352, 0.0020467485301196575, 0.015825390815734863, 0.012641118839383125, -0.003205316374078393, 0.017511947080492973, 0.029303444549441338, -0.010562377981841564, 0.033438775688409805, 0.001278574694879353, -0.00832242053002119, -0.010915951803326607, -0.0050821551121771336, 0.013831023126840591, -0.013144690543413162, -0.009783933870494366, 0.014832193963229656, -0.005956039763987064, 0.006176362745463848, -0.01623590663075447, 0.020898764953017235, -0.05581313371658325, -0.008529732003808022, -0.03454029560089111, -0.014142930507659912, 0.011827795766294003, 0.014104082249104977, -0.026890158653259277, -0.0008064472931437194, 0.022233225405216217, -0.044515933841466904, -0.0312751941382885, 0.02463742159307003, -0.016517745330929756, -0.028665024787187576, 0.025397904217243195, 0.01059709768742323, -0.016224374994635582, 0.014790786430239677, -0.023049240931868553, -0.0007945243269205093, 0.01904328353703022, -0.012377497740089893, -0.042760469019412994, 0.027998829260468483, -0.010877512395381927, -0.003172120312228799, -0.021479319781064987, 0.007336319424211979, 0.003254625014960766, 0.03632175177335739, -0.00025415082927793264, -0.06913728266954422, 0.0444582961499691, -0.0013274451484903693, 0.010171856731176376, -0.03323165699839592, 0.032216429710388184, -0.00468665873631835, 0.0012197239557281137, -0.03888174146413803, -0.027965977787971497, 0.03824061527848244, 0.03937705606222153, -0.005715273320674896, 0.03635898604989052, -0.0011212332174181938, -0.05071480944752693, 0.018804511055350304, -0.024393197149038315, -0.021270567551255226, 0.03369315713644028, 0.062429092824459076, 0.011413052678108215, -0.010617278516292572, -0.0033884167205542326, 0.01611497439444065, 0.03573368489742279, 0.0273545254021883, -0.009080033749341965, -0.010893457569181919, 0.059084050357341766, 
-0.027877911925315857, 0.02963397465646267, -0.004450538195669651, -0.03537893295288086, 0.017137490212917328, 0.013180392794311047, 0.021031204611063004, 0.05179160460829735, 0.04291229322552681, 0.009676727466285229, -0.042955897748470306, -0.07401128113269806, -0.005647455342113972, 0.006301769521087408, 0.034709684550762177, -0.00492066377773881, 0.014649061486124992, 0.0047159260138869286, -0.02851254865527153, 0.0033762850798666477, 0.06120862066745758, -0.007481797132641077, -0.010820640251040459, -0.016394825652241707, 0.025390345603227615, -0.04319676384329796, 0.04589197412133217, 0.01837175525724888, 0.005391274578869343, 0.02584153600037098, -0.008418353274464607, 0.011387174017727375, 0.0015029879286885262, -0.005093506071716547, 0.014051862992346287, 0.011331203393638134, 0.025885697454214096, 0.020144887268543243, 0.004722125828266144, 0.008589561097323895, 0.05613291263580322, -0.030171314254403114, -0.012939552776515484, -0.0007372614345513284, -0.01786070317029953, 0.005612918175756931, 0.014163665473461151, 0.011123455129563808, -0.025279810652136803, 0.014982962049543858, 0.021799352020025253, 0.008098488673567772, 0.023617476224899292, -0.01321226917207241, -0.03035244159400463, 0.06300585716962814, -0.0017082983395084739, 0.016935596242547035, -0.015479663386940956, 0.05396709218621254, -0.04700633883476257, 0.06204821914434433, 0.003135967068374157, -0.04062909260392189, -0.0362183153629303, 0.03383960574865341, -0.04181458055973053, -0.05809428542852402, -0.054695893079042435, -0.001733630197122693, 0.005917607340961695, -0.01767047308385372, 0.02627820521593094, -0.016769615933299065, -0.007416990119963884, 0.008251821622252464, -0.016786783933639526, 0.039685577154159546, 0.046886298805475235, -0.03867030888795853, -0.002814202569425106, 0.007892468944191933, 0.02072514221072197, -0.019827721640467644, -0.0015886119799688458, -0.03254994750022888, -0.037779614329338074, -0.007901632227003574, -0.019213782623410225, 0.034767575562000275, -0.029835674911737442, -0.028487712144851685, 0.013765687122941017, -0.050568222999572754, -0.05108531937003136, 0.028739603236317635, 0.006997654680162668, -0.021554742008447647, 0.021372230723500252, 0.021178681403398514, 0.023837337270379066, 0.023853376507759094, -0.02738768979907036, -0.015781980007886887, 0.05501457676291466, 0.024376938119530678, -0.02811945043504238, 0.031820740550756454, -0.005692190956324339, -0.0167120061814785, -0.01879412867128849, 0.0018797912634909153, 0.021650541573762894, -0.03387384116649628, -0.008616333827376366, 0.0033766734413802624, 0.019643094390630722, -0.04822556674480438, -0.016628323122859, 0.03083554469048977, -0.013973399996757507, 0.03139633312821388, -0.04712454602122307, -0.048945553600788116, -0.01584695465862751, -0.025097297504544258, 0.0007133444305509329, -0.0018528754590079188, 0.007813619449734688, 0.013178582303225994, 0.03337487205862999, -0.009507269598543644, -0.06259062141180038, 0.2559143900871277, 0.012733114883303642, 0.026578156277537346, 0.02339700050652027, -0.014654736965894699, 0.035242706537246704, 0.02522038109600544, -0.010312406346201897, -0.012923394329845905, -0.03429264575242996, 0.019129382446408272, -0.013274310156702995, 0.025625433772802353, -0.010216091759502888, 0.003879065392538905, 0.01790439523756504, 0.030784979462623596, 0.04098392650485039, 0.05014337599277496, -0.05559558793902397, -0.03677109628915787, 0.015427793376147747, -0.005083793308585882, 0.015120198018848896, -0.025619598105549812, -0.006277056410908699, 0.002780785784125328, 
0.011669964529573917, 0.03279343247413635, -0.05522839352488518, 0.016536805778741837, -0.03090170957148075, 0.019413955509662628, -0.004875530954450369, -0.04683270305395126, -0.012802250683307648, -0.050167739391326904, -0.014227673411369324, -0.02511955238878727, -0.014870339073240757, 0.04144902899861336, 0.03558841720223427, -0.011204957962036133, -0.03172820061445236, -0.02091030962765217, 0.031131915748119354, 0.011320066638290882, 0.030869631096720695, 0.020962901413440704, -0.0006403184961527586, 0.01702914386987686, -0.02613428421318531, 0.0361393466591835, 0.003163991728797555, -0.03827638924121857, 0.0024991335812956095, 0.018053149804472923, -0.022139113396406174, 0.026488590985536575, 0.016433758661150932, -0.05274081602692604, 0.020631887018680573, -0.010170391760766506, 0.0030810926109552383, -0.0468614399433136, 0.009699639864265919, 0.017219919711351395, -0.014988659881055355, -0.04337825998663902, -0.0041097491048276424, -0.031971678137779236, -0.026196328923106194, -0.01790686510503292, -0.04874570667743683, 0.02842884324491024, -0.0013465301599353552, -0.058186501264572144, 0.022193025797605515, -0.01174273993819952, -0.02975834347307682, 0.044674165546894073, -0.04925072565674782, 0.030183974653482437, 0.04022785276174545, 0.028530560433864594, -0.03292233869433403, -0.013727006502449512, -0.011093120090663433, -0.005456676706671715, 0.017279792577028275, 0.011725067161023617, 0.00934798363596201, 0.04143444821238518, 0.014479548670351505, 0.0031773746013641357], index=0, object='embedding')], model='BAAI/bge-large-en-v1.5', object='list', usage=Usage(prompt_tokens=7, total_tokens=7, completion_tokens=0, prompt_tokens_details=None), id='embd-efeebdf86c91402982f8fd2b1622d6b3', created=1768555000, meta={'usage': {'credits_used': 1}}) ``` {% endcode %}
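An embedding vector becomes useful when you compare it with other vectors, for example to rank texts by semantic similarity. Below is a minimal sketch of such a comparison using cosine similarity. The `client.embeddings.create` call, the base URL, and the example sentences are illustrative assumptions that mirror the OpenAI-compatible client used in other examples in these docs; only the `BAAI/bge-large-en-v1.5` model ID comes from the response above.

```python
import math
from openai import OpenAI

# A minimal sketch: embed two texts and compare them with cosine similarity.
# The base URL and the embeddings call mirror the OpenAI-compatible client used
# elsewhere in these docs; adjust them if your setup differs.
client = OpenAI(
    base_url="https://api.aimlapi.com",
    api_key="",  # Insert your AIML API Key
)

def embed(text: str) -> list[float]:
    """Return the embedding vector for a single input string."""
    response = client.embeddings.create(
        model="BAAI/bge-large-en-v1.5",
        input=text,
    )
    return response.data[0].embedding

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

query = embed("How do I reset my password?")
candidate = embed("Steps to recover a forgotten account password")
print(cosine_similarity(query, candidate))  # closer to 1.0 means more similar meaning
```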
You can find a more advanced example of using embedding vectors in our article [Find Relevant Answers: Semantic Search with Text Embeddings](https://docs.aimlapi.com/use-cases/find-relevant-answers-semantic-search-with-text-embeddings) in the Use Cases section.

---

# Source: https://docs.aimlapi.com/api-references/video-models/bytedance.md
# Source: https://docs.aimlapi.com/api-references/image-models/bytedance.md
# Source: https://docs.aimlapi.com/api-references/text-models-llm/bytedance.md

# ByteDance

- [Seed 1.8](/api-references/text-models-llm/bytedance/seed-1.8.md)

---

# Source: https://docs.aimlapi.com/faq/call-api-in-the-asynchronous-mode.md

# Can I call API in the asynchronous mode?

Sure, you can call any of our available models asynchronously. Let's see how this works with an example in Python.

## Example in Python

Below, we will see how two requests are handled when the second one is shorter and lighter than the first. We will compare synchronous processing (first example) and asynchronous processing (second example). After each example, the **Response** section shows the model's output for both queries. Pay attention to the order in which the answers are returned in each response!

### **Synchronous call:**

```python
from openai import OpenAI

def complete_chat(question):
    api_key = ''
    client = OpenAI(
        base_url='https://api.aimlapi.com',
        api_key=api_key,
    )

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    )
    print(f"Response for: {question}\n{response}\n")

def main():
    long_question = "List the 5 most famous hockey players of the 20th century."
    short_question = "What is 2+2?"

    # Execute both requests sequentially
    complete_chat(long_question)
    complete_chat(short_question)

if __name__ == "__main__":
    main()
```
Response {% code overflow="wrap" %} ```jsonp Response for: List the 5 most famous hockey players of the 20th century. ChatCompletion(id='chatcmpl-B2cvJsSA2txXAYTWIfxIg55DYBchm', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='The 20th century saw numerous iconic hockey players who made significant impacts on the game. Here are five of the most famous:\n\n1. **Wayne Gretzky** - Often referred to as "The Great One," Gretzky set numerous records and is widely considered the greatest hockey player of all time. He played the majority of his career in the NHL with the Edmonton Oilers and the Los Angeles Kings.\n\n2. **Bobby Orr** - Renowned for revolutionizing the role of the defenseman in hockey, Orr\'s incredible speed, scoring, and playmaking abilities made him a standout player, primarily with the Boston Bruins.\n\n3. **Gordie Howe** - Known as "Mr. Hockey," Howe\'s career spanned several decades, and he was famous for his toughness, skill, and scoring ability. He played the majority of his career with the Detroit Red Wings.\n\n4. **Mario Lemieux** - Known as "Super Mario," Lemieux was a dominant force with the Pittsburgh Penguins, overcoming numerous health challenges to become one of the game\'s all-time greats.\n\n5. **Maurice "Rocket" Richard** - As a prolific goal scorer, Richard became the first player to score 50 goals in a season and 500 in a career. He played his entire career with the Montreal Canadiens and was an inspiration to generations of players.\n\nEach of these players not only excelled on the ice but also left a lasting legacy on the sport.', refusal=None, role='assistant', audio=None, function_call=None, tool_calls=None))], created=1739965977, model='gpt-4o-2024-08-06', object='chat.completion', service_tier=None, system_fingerprint='fp_523b9b6e5f', usage=CompletionUsage(completion_tokens=9293, prompt_tokens=231, total_tokens=9524, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0))) Response for: What is 2+2? ChatCompletion(id='chatcmpl-B2cvP4PDesi5QipRvtzNHH6vYkszq', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='2+2 equals 4.', refusal=None, role='assistant', audio=None, function_call=None, tool_calls=None))], created=1739965983, model='gpt-4o-2024-08-06', object='chat.completion', service_tier=None, system_fingerprint='fp_523b9b6e5f', usage=CompletionUsage(completion_tokens=252, prompt_tokens=147, total_tokens=399, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0))) ``` {% endcode %}
### **Asynchronous call**:

```python
import asyncio
from openai import AsyncOpenAI

async def complete_chat(question):
    api_key = ''
    client = AsyncOpenAI(
        base_url='https://api.aimlapi.com',
        api_key=api_key,
    )

    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}],
    )
    print(f"Response for: {question}\n{response}\n")

async def main():
    long_question = "List the 5 most famous hockey players of the 20th century."
    short_question = "What is 2+2?"

    # Run both requests concurrently
    await asyncio.gather(
        complete_chat(long_question),
        complete_chat(short_question),
    )

if __name__ == "__main__":
    try:
        asyncio.run(main())  # Works in a regular Python script
    except RuntimeError:
        loop = asyncio.get_event_loop()
        loop.run_until_complete(main())  # Works in Jupyter and other environments
```
Response {% code overflow="wrap" %} ```jsonp Response for: What is 2+2? ChatCompletion(id='chatcmpl-B2cmWSVgRW5N7bq4Plj3C39881Fc3', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='2 + 2 equals 4.', refusal=None, role='assistant', audio=None, function_call=None, tool_calls=None))], created=1739965432, model='gpt-4o-2024-08-06', object='chat.completion', service_tier=None, system_fingerprint='fp_523b9b6e5f', usage=CompletionUsage(completion_tokens=284, prompt_tokens=147, total_tokens=431, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0))) Response for: List the 5 most famous hockey players of the 20th century. ChatCompletion(id='chatcmpl-B2cmWL39tvXjlmSGBuA1ckWDiOqQ5', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content='The 20th century saw many legendary hockey players who left a significant impact on the sport. Here are five of the most famous:\n\n1. **Wayne Gretzky** - Often referred to as "The Great One," Gretzky is widely considered the best hockey player of all time. His records and achievements have set the standard for excellence in the NHL.\n\n2. **Gordie Howe** - Known as "Mr. Hockey," Howe\'s career spanned five decades, and he was renowned for his scoring ability, physical play, and longevity in the sport.\n\n3. **Bobby Orr** - Orr revolutionized the defenseman position with his offensive skills and skating ability. He is most famous for his time with the Boston Bruins and his iconic "flying goal."\n\n4. **Maurice "Rocket" Richard** - A goal-scoring machine, Richard became the first player in NHL history to score 50 goals in a single season and 500 in a career. He was an icon for the Montreal Canadiens and a hero in Quebec.\n\n5. **Mario Lemieux** - Known as "Super Mario," Lemieux was an incredibly skilled player who overcame health challenges to become one of the most prolific scorers in NHL history.\n\nThese players not only dominated in their respective eras but also contributed to the evolution and popularity of hockey worldwide.', refusal=None, role='assistant', audio=None, function_call=None, tool_calls=None))], created=1739965432, model='gpt-4o-2024-08-06', object='chat.completion', service_tier=None, system_fingerprint='fp_523b9b6e5f', usage=CompletionUsage(completion_tokens=8537, prompt_tokens=231, total_tokens=8768, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=0, reasoning_tokens=0, rejected_prediction_tokens=0), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0))) ``` {% endcode %}
As we can see, with asynchronous execution the response to a shorter or lighter query may be returned faster than the response to a longer or more complex one, even if the lighter query was formally queued second.

---

# Source: https://docs.aimlapi.com/faq/can-i-use-api-in-nodejs.md

# Can I use API in NodeJS?

Yes, definitely! Here is a quick guide on how to start your adventure with the AI/ML API in NodeJS.

## Installation

### Is it already installed?

Before using an API in NodeJS, you need to ensure that NodeJS is installed on your system. The simplest way is to open a terminal and run the following command:

```bash
node --version
```

If this command prints a NodeJS version, then you can proceed to the [example article](https://docs.aimlapi.com/quickstart/setting-up#example-in-node.js). If not, you need to install NodeJS on your system. The installation steps depend on your operating system, but here are some quick instructions to get you started:

### On Windows / Mac

Install the NodeJS package from the [official distribution site](https://nodejs.org/en). It is preferable to choose the LTS version, but it all depends on your project.

### On Linux

The installation process depends on your distribution. For example, on Ubuntu, you can use the following commands to add the NodeSource repository and then install version 20:

```bash
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt-get install -y nodejs
```

## Using

Once you've verified your installation, you can proceed with the tutorial on the [quickstart](https://docs.aimlapi.com/quickstart/setting-up) page.

---

# Source: https://docs.aimlapi.com/faq/can-i-use-api-in-python.md

# Can I use API in Python?

Of course you can! Here is a quick guide on how to configure your environment and use our API.

## Installation

Our API is just an interface, which means you need to use it from within an application or your own code. When you choose Python, you first need to install it on your system.

### Is it already installed?

Depending on your operating system and version, it might already be installed out-of-the-box. To test it, open the [terminal](https://docs.aimlapi.com/glossary/concepts#terminal) and type one of the following commands:

```bash
python3 --version
python --version
py --version
```

If any of these commands prints a Python version higher than 3.8, as shown below, then Python is properly installed:

```bash
Python 3.11.0
```

If your result is different (for example, if it just prints "Python" without a version, or if the Microsoft Store opens on Windows 11), then it isn't installed.

### On Windows

You can install Python from the [Microsoft Store marketplace](https://apps.microsoft.com/detail/9pjpw5ldxlz5?hl=en-US\&gl=US) if you are using a newer version of Windows 11, or download it from the [official Python distribution site](https://www.python.org/downloads/) and install it as a regular executable. You can safely install version 3.10, as most modern modules support it, or the latest version if you wish.

### On Mac

There are several good articles that describe the installation process. You can look at one [here](https://docs.python-guide.org/starting/install3/osx/) or search the Internet for more options.

### On Linux

The installation process varies depending on your Linux distribution.
Here is an example for Ubuntu (first make sure that Python isn't already installed, as described at the beginning of this section):

```bash
sudo apt update
sudo apt install -y python3-venv
```

## Using

Once you've verified your installation, you can proceed with the tutorial on the [quickstart](https://docs.aimlapi.com/quickstart/setting-up) page.

---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-3-haiku.md

# Claude 3 Haiku

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `anthropic/claude-3-haiku`
* `anthropic/claude-3-haiku-20240307`
* `claude-3-haiku-20240307`
* `claude-3-haiku-latest`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}

## Model Overview

A quick and streamlined model, offering near-instant responsiveness.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field (this is what the model will respond to).

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
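For illustration, here is a minimal Python sketch of the request these steps describe, sent to the `POST /v1/chat/completions` endpoint documented in the API schema below. The use of the OpenAI-compatible client, the prompt, and the `max_tokens` value are assumptions; the model ID is one of those listed at the top of this page, and the full multi-language code example remains at the bottom of the page.

```python
from openai import OpenAI

# A minimal sketch, assuming the OpenAI-compatible client shown in the FAQ examples.
client = OpenAI(
    base_url="https://api.aimlapi.com",
    api_key="",  # Insert your AIML API Key
)

response = client.chat.completions.create(
    model="claude-3-haiku-20240307",  # any model ID from the list at the top of this page
    messages=[
        {"role": "user", "content": "Explain the difference between HTTP and HTTPS in one paragraph."},
    ],
    max_tokens=256,  # optional: caps the length of the generated reply
)

print(response.choices[0].message.content)
```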
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["claude-3-haiku-20240307","claude-3-haiku","anthropic/claude-3-haiku-20240307","claude-3-haiku-latest"]},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"source":{"type":"object","properties":{"type":{"type":"string","enum":["base64"]},"media_type":{"type":"string","enum":["image/jpeg","image/png","image/gif","image/webp"]},"data":{"type":"string"}},"required":["type","media_type","data"]}},"required":["type","source"]},{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_audio"],"description":"The type of the content part."},"input_audio":{"type":"object","properties":{"data":{"type":"string","description":"Base64 encoded audio data."},"format":{"type":"string","enum":["wav","mp3"],"description":"The format of the encoded audio data. Currently supports \"wav\" and \"mp3\"."}},"required":["data","format"]}},"required":["type","input_audio"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. 
Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["function"]},"content":{"type":"string"},"name":{"type":"string"}},"required":["role","content","name"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for a previous audio response from the model."}},"required":["id"],"description":"Data about a previous audio response from the model."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"],"additionalProperties":false},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"top_p":{"type":"number","minimum":0.1,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"]},"function":{"type":"object","properties":{"name":{"type":"string"}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"reasoning_effort":{"type":"string","enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"audio":{"type":"object","nullable":true,"properties":{"format":{"type":"string","enum":["wav","mp3","flac","opus","pcm16"],"description":"Specifies the output audio format. Must be one of wav, mp3, flac, opus, or pcm16."},"voice":{"type":"string","enum":["alloy","ash","ballad","coral","echo","fable","nova","onyx","sage","shimmer"],"description":"The voice the model uses to respond. Supported voices are alloy, ash, ballad, coral, echo, fable, nova, onyx, sage, and shimmer."}},"required":["format","voice"],"description":"Parameters for audio output. Required when audio output is requested with modalities: [\"audio\"]."},"modalities":{"type":"array","nullable":true,"items":{"type":"string","enum":["text","audio"]},"description":"Output types that you would like the model to generate. Most models are capable of generating text, which is the default:\n \n [\"text\"]\n \n Model can also be used to generate audio. To request that this model generate both text and audio responses, you can use:\n \n [\"text\", \"audio\"]"},"web_search_options":{"type":"object","properties":{"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"approximate":{"type":"object","properties":{"city":{"type":"string","description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"description":"Approximate location parameters for the search."},"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. 
Always approximate."}},"required":["approximate","type"],"description":"Approximate location parameters for the search."}},"description":"This tool searches the web for relevant results to use in a response."}},"required":["model","messages"],"title":"claude-3-haiku-20240307"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"claude-3-haiku-latest", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { try { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of YOUR_AIMLAPI_KEY 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'claude-3-haiku-latest', messages:[ { role:'user', // Insert your question for the model here, instead of Hello: content: 'Hello' } ] }), }); if (!response.ok) { throw new Error(`HTTP error! 
Status ${response.status}`); } const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } catch (error) { console.error('Error', error); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
**Response**:

{% code overflow="wrap" %}
```json5
{'id': 'msg_01Fd4uU3AZ3TXzSpSKN7oeDP', 'object': 'chat.completion', 'model': 'claude-3-haiku-20240307', 'choices': [{'index': 0, 'message': {'reasoning_content': '', 'content': 'Hello! How can I assist you today?', 'role': 'assistant'}, 'finish_reason': 'end_turn', 'logprobs': None}], 'created': 1744218395, 'usage': {'prompt_tokens': 4, 'completion_tokens': 32, 'total_tokens': 36}}
```
{% endcode %}
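If you only need the generated text, it lives in `choices[0].message.content` of the response above. Here is a minimal sketch that reuses the `data` variable from the Python example on this page:

{% code overflow="wrap" %}
```python
# `data` is the parsed JSON response from the request above.
reply = data["choices"][0]["message"]["content"]
finish_reason = data["choices"][0]["finish_reason"]
total_tokens = data["usage"]["total_tokens"]

print(f"Model reply: {reply}")
print(f"Finish reason: {finish_reason}")
print(f"Total tokens used: {total_tokens}")
```
{% endcode %}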
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-3-opus.md

# Claude 3 Opus

This documentation is valid for the following list of our models:

* anthropic/claude-3-opus
* anthropic/claude-3-opus-20240229
* claude-3-opus-20240229
* claude-3-opus-latest
## Model Overview

A highly capable multimodal model designed to process both text and image data. It excels in tasks requiring complex reasoning, mathematical problem-solving, coding, and multilingual text understanding.

## How to Make a Call
**Step-by-Step Instructions**

1. **Setup You Can’t Skip**

   * [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).
   * [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

2. **Copy the code example**

   At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

3. **Modify the code example**

   * Replace `<YOUR_AIMLAPI_KEY>` with your actual AI/ML API key from your account.
   * Insert your question or request into the `content` field—this is what the model will respond to.

4. **(Optional) Adjust other optional parameters if needed**

   Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior (a short sketch follows these steps). Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

5. **Run your modified code**

   Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
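As a quick illustration of step 4, here is a minimal, non-official sketch that adds a couple of optional parameters from the API schema (`temperature` and `max_tokens`) and a `system` message to the required fields; the prompt text is purely illustrative:

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "claude-3-opus-latest",
        "messages": [
            {"role": "system", "content": "You are a concise assistant."},           # illustrative system prompt
            {"role": "user", "content": "Summarize the benefits of unit testing."},  # illustrative user prompt
        ],
        # Optional parameters described in the API schema below:
        "temperature": 0.2,  # lower values make the output more focused and deterministic
        "max_tokens": 300,   # cap the length (and cost) of the completion
    },
)
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}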
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["anthropic/claude-3-opus","anthropic/claude-3-opus-20240229","claude-3-opus-20240229","claude-3-opus-latest"]},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"source":{"type":"object","properties":{"type":{"type":"string","enum":["base64"]},"media_type":{"type":"string","enum":["image/jpeg","image/png","image/gif","image/webp"]},"data":{"type":"string"}},"required":["type","media_type","data"]}},"required":["type","source"]},{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_audio"],"description":"The type of the content part."},"input_audio":{"type":"object","properties":{"data":{"type":"string","description":"Base64 encoded audio data."},"format":{"type":"string","enum":["wav","mp3"],"description":"The format of the encoded audio data. Currently supports \"wav\" and \"mp3\"."}},"required":["data","format"]}},"required":["type","input_audio"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. 
Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["function"]},"content":{"type":"string"},"name":{"type":"string"}},"required":["role","content","name"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for a previous audio response from the model."}},"required":["id"],"description":"Data about a previous audio response from the model."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"],"additionalProperties":false},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"top_p":{"type":"number","minimum":0.1,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"]},"function":{"type":"object","properties":{"name":{"type":"string"}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"reasoning_effort":{"type":"string","enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"audio":{"type":"object","nullable":true,"properties":{"format":{"type":"string","enum":["wav","mp3","flac","opus","pcm16"],"description":"Specifies the output audio format. Must be one of wav, mp3, flac, opus, or pcm16."},"voice":{"type":"string","enum":["alloy","ash","ballad","coral","echo","fable","nova","onyx","sage","shimmer"],"description":"The voice the model uses to respond. Supported voices are alloy, ash, ballad, coral, echo, fable, nova, onyx, sage, and shimmer."}},"required":["format","voice"],"description":"Parameters for audio output. Required when audio output is requested with modalities: [\"audio\"]."},"modalities":{"type":"array","nullable":true,"items":{"type":"string","enum":["text","audio"]},"description":"Output types that you would like the model to generate. Most models are capable of generating text, which is the default:\n \n [\"text\"]\n \n Model can also be used to generate audio. To request that this model generate both text and audio responses, you can use:\n \n [\"text\", \"audio\"]"},"web_search_options":{"type":"object","properties":{"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"approximate":{"type":"object","properties":{"city":{"type":"string","description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"description":"Approximate location parameters for the search."},"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. 
Always approximate."}},"required":["approximate","type"],"description":"Approximate location parameters for the search."}},"description":"This tool searches the web for relevant results to use in a response."}},"required":["model","messages"],"title":"claude-3-opus-20240229"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"claude-3-opus-latest", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { try { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of YOUR_AIMLAPI_KEY 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'claude-3-opus-latest', messages:[ { role:'user', // Insert your question for the model here, instead of Hello: content: 'Hello' } ] }), }); if (!response.ok) { throw new Error(`HTTP error! 
Status ${response.status}`); } const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } catch (error) { console.error('Error', error); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
**Response**:

{% code overflow="wrap" %}
```json5
{'id': 'msg_013njSJ6FKESFossfd8UHddJ', 'object': 'chat.completion', 'model': 'claude-3-opus-20240229', 'choices': [{'index': 0, 'message': {'reasoning_content': '', 'content': 'Hello! How can I assist you today?', 'role': 'assistant'}, 'finish_reason': 'end_turn', 'logprobs': None}], 'created': 1744218476, 'usage': {'prompt_tokens': 252, 'completion_tokens': 1890, 'total_tokens': 2142}}
```
{% endcode %}
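Because Claude 3 Opus is multimodal, a user message can also carry image content parts, as described in the API schema above. The following is a hedged sketch rather than an official sample; the image URL is a placeholder that you would replace with a link to your own image (JPG/JPEG, PNG, GIF, or WEBP):

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "claude-3-opus-latest",
        "messages": [
            {
                "role": "user",
                # A content array can mix text parts and image parts:
                "content": [
                    {"type": "text", "text": "What is shown in this image?"},
                    {
                        "type": "image_url",
                        # Placeholder URL; replace with a link to your own image
                        "image_url": {"url": "https://example.com/photo.jpg"},
                    },
                ],
            }
        ],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}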
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-3.5-haiku.md

# Claude 3.5 Haiku

This documentation is valid for the following list of our models:

* anthropic/claude-3-5-haiku
* anthropic/claude-3-5-haiku-20241022
* claude-3-5-haiku-20241022
* claude-3-5-haiku-latest
## Model Overview

A cutting-edge model designed for rapid data processing and advanced reasoning capabilities. It excels in coding assistance, customer service interactions, and content moderation.

## How to Make a Call
**Step-by-Step Instructions**

1. **Setup You Can’t Skip**

   * [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).
   * [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

2. **Copy the code example**

   At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

3. **Modify the code example**

   * Replace `<YOUR_AIMLAPI_KEY>` with your actual AI/ML API key from your account.
   * Insert your question or request into the `content` field—this is what the model will respond to.

4. **(Optional) Adjust other optional parameters if needed**

   Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior (for example, `stream`; a short sketch follows these steps). Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

5. **Run your modified code**

   Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
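The API schema below also exposes a `stream` parameter for receiving the reply as server-sent events. The sketch below is an assumption-based example (it presumes the usual `data: {...}` / `data: [DONE]` SSE framing and prints the partial `choices[0].delta.content` chunks as they arrive); treat it as a starting point rather than an official sample:

{% code overflow="wrap" %}
```python
import json
import requests

with requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "claude-3-5-haiku-latest",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,  # request server-sent events instead of a single JSON body
    },
    stream=True,
) as response:
    for line in response.iter_lines():
        if not line:
            continue
        decoded = line.decode("utf-8")
        if not decoded.startswith("data: "):
            continue
        payload = decoded[len("data: "):]
        if payload.strip() == "[DONE]":
            break
        chunk = json.loads(payload)
        if not chunk.get("choices"):
            continue  # e.g. a final usage-only chunk
        delta = chunk["choices"][0].get("delta") or {}
        piece = delta.get("content")
        if piece:
            print(piece, end="", flush=True)
print()
```
{% endcode %}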
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["anthropic/claude-3-5-haiku","anthropic/claude-3-5-haiku-20241022","claude-3-5-haiku-20241022","claude-3-5-haiku-latest"]},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"source":{"type":"object","properties":{"type":{"type":"string","enum":["base64"]},"media_type":{"type":"string","enum":["image/jpeg","image/png","image/gif","image/webp"]},"data":{"type":"string"}},"required":["type","media_type","data"]}},"required":["type","source"]},{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_audio"],"description":"The type of the content part."},"input_audio":{"type":"object","properties":{"data":{"type":"string","description":"Base64 encoded audio data."},"format":{"type":"string","enum":["wav","mp3"],"description":"The format of the encoded audio data. Currently supports \"wav\" and \"mp3\"."}},"required":["data","format"]}},"required":["type","input_audio"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. 
Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["function"]},"content":{"type":"string"},"name":{"type":"string"}},"required":["role","content","name"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for a previous audio response from the model."}},"required":["id"],"description":"Data about a previous audio response from the model."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"],"additionalProperties":false},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"top_p":{"type":"number","minimum":0.1,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"]},"function":{"type":"object","properties":{"name":{"type":"string"}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"reasoning_effort":{"type":"string","enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"audio":{"type":"object","nullable":true,"properties":{"format":{"type":"string","enum":["wav","mp3","flac","opus","pcm16"],"description":"Specifies the output audio format. Must be one of wav, mp3, flac, opus, or pcm16."},"voice":{"type":"string","enum":["alloy","ash","ballad","coral","echo","fable","nova","onyx","sage","shimmer"],"description":"The voice the model uses to respond. Supported voices are alloy, ash, ballad, coral, echo, fable, nova, onyx, sage, and shimmer."}},"required":["format","voice"],"description":"Parameters for audio output. Required when audio output is requested with modalities: [\"audio\"]."},"modalities":{"type":"array","nullable":true,"items":{"type":"string","enum":["text","audio"]},"description":"Output types that you would like the model to generate. Most models are capable of generating text, which is the default:\n \n [\"text\"]\n \n Model can also be used to generate audio. To request that this model generate both text and audio responses, you can use:\n \n [\"text\", \"audio\"]"},"web_search_options":{"type":"object","properties":{"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"approximate":{"type":"object","properties":{"city":{"type":"string","description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"description":"Approximate location parameters for the search."},"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. 
Always approximate."}},"required":["approximate","type"],"description":"Approximate location parameters for the search."}},"description":"This tool searches the web for relevant results to use in a response."}},"required":["model","messages"],"title":"claude-3-5-haiku-20241022"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"claude-3-5-haiku-latest", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { try { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of YOUR_AIMLAPI_KEY 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'claude-3-5-haiku-latest', messages:[ { role:'user', // Insert your question for the model here, instead of Hello: content: 'Hello' } ] }), }); if (!response.ok) { throw new Error(`HTTP error! 
Status ${response.status}`); } const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } catch (error) { console.error('Error', error); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
**Response**:

{% code overflow="wrap" %}
```json5
{
  "id": "msg_01QfRmDBXVWcARjbwZBbJxrR",
  "object": "chat.completion",
  "model": "claude-3-5-haiku-20241022",
  "choices": [
    {
      "index": 0,
      "message": {
        "reasoning_content": "",
        "content": "Hi there! How are you doing today? Is there anything I can help you with?",
        "role": "assistant"
      },
      "finish_reason": "end_turn",
      "logprobs": null
    }
  ],
  "created": 1744218440,
  "usage": {"prompt_tokens": 17, "completion_tokens": 221, "total_tokens": 238}
}
```
{% endcode %}
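The API schema above also lists a `stream` parameter: with `"stream": true`, the response is delivered as server-sent events rather than a single JSON object. Below is a minimal sketch of reading such a stream with `requests`, assuming the common OpenAI-style SSE convention of `data:`-prefixed lines ending with a `[DONE]` sentinel; treat the exact wire format as defined by the `chat.completion.chunk` schema above rather than by this example:

{% code overflow="wrap" %}
```python
import json
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "claude-3-5-haiku-latest",
        "messages": [{"role": "user", "content": "Write a haiku about the sea."}],
        "stream": True,  # deliver the response as server-sent events
    },
    stream=True,  # let requests yield the body incrementally as it arrives
)
response.raise_for_status()

for line in response.iter_lines():
    if not line:
        continue
    text = line.decode("utf-8")
    if not text.startswith("data: "):
        continue
    payload = text[len("data: "):]
    if payload.strip() == "[DONE]":  # assumed end-of-stream sentinel
        break
    chunk = json.loads(payload)
    if chunk.get("choices"):
        delta = chunk["choices"][0].get("delta", {})
        print(delta.get("content", ""), end="", flush=True)
print()
```
{% endcode %}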
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-3.7-sonnet.md # Claude 3.7 Sonnet

This documentation is valid for the following list of our models:

  • anthropic/claude-3.7-sonnet
  • claude-3-7-sonnet-20250219
  • claude-3-7-sonnet-latest
Try in Playground
## Model Overview

Claude 3.7 Sonnet is a hybrid reasoning model designed to tackle complex tasks. It introduces dual-mode operation, combining standard language generation with extended thinking capabilities.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `<YOUR_AIMLAPI_KEY>` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field: this is what the model will respond to.

:digit\_four: **(Optional) Adjust other parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters to adjust the model’s behavior, as in the short sketch below. The corresponding [API schema](#api-schema) lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
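Because this is a hybrid reasoning model, one optional parameter worth highlighting is `reasoning_effort` (`low`, `medium`, or `high`), listed in the API schema below. A minimal sketch of passing it alongside the required fields; how strongly it influences the model's extended thinking is determined by the model itself, and both the prompt and the values here are only an illustration:

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "claude-3-7-sonnet-latest",
        "messages": [
            {
                "role": "user",
                "content": "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?",
            }
        ],
        # Optional parameters (see the API schema below); values are illustrative:
        "reasoning_effort": "high",  # low | medium | high
        "max_tokens": 400,
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}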
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["anthropic/claude-3.7-sonnet","claude-3-7-sonnet-20250219","claude-3-7-sonnet-latest"]},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"source":{"type":"object","properties":{"type":{"type":"string","enum":["base64"]},"media_type":{"type":"string","enum":["image/jpeg","image/png","image/gif","image/webp"]},"data":{"type":"string"}},"required":["type","media_type","data"]}},"required":["type","source"]},{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_audio"],"description":"The type of the content part."},"input_audio":{"type":"object","properties":{"data":{"type":"string","description":"Base64 encoded audio data."},"format":{"type":"string","enum":["wav","mp3"],"description":"The format of the encoded audio data. Currently supports \"wav\" and \"mp3\"."}},"required":["data","format"]}},"required":["type","input_audio"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. 
Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["function"]},"content":{"type":"string"},"name":{"type":"string"}},"required":["role","content","name"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for a previous audio response from the model."}},"required":["id"],"description":"Data about a previous audio response from the model."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"],"additionalProperties":false},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"top_p":{"type":"number","minimum":0.1,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"]},"function":{"type":"object","properties":{"name":{"type":"string"}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"reasoning_effort":{"type":"string","enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"audio":{"type":"object","nullable":true,"properties":{"format":{"type":"string","enum":["wav","mp3","flac","opus","pcm16"],"description":"Specifies the output audio format. Must be one of wav, mp3, flac, opus, or pcm16."},"voice":{"type":"string","enum":["alloy","ash","ballad","coral","echo","fable","nova","onyx","sage","shimmer"],"description":"The voice the model uses to respond. Supported voices are alloy, ash, ballad, coral, echo, fable, nova, onyx, sage, and shimmer."}},"required":["format","voice"],"description":"Parameters for audio output. Required when audio output is requested with modalities: [\"audio\"]."},"modalities":{"type":"array","nullable":true,"items":{"type":"string","enum":["text","audio"]},"description":"Output types that you would like the model to generate. Most models are capable of generating text, which is the default:\n \n [\"text\"]\n \n Model can also be used to generate audio. To request that this model generate both text and audio responses, you can use:\n \n [\"text\", \"audio\"]"},"web_search_options":{"type":"object","properties":{"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"approximate":{"type":"object","properties":{"city":{"type":"string","description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"description":"Approximate location parameters for the search."},"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. 
Always approximate."}},"required":["approximate","type"],"description":"Approximate location parameters for the search."}},"description":"This tool searches the web for relevant results to use in a response."}},"required":["model","messages"],"title":"claude-3-7-sonnet-20250219"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"anthropic/claude-3.7-sonnet", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { try { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of YOUR_AIMLAPI_KEY 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'anthropic/claude-3.7-sonnet', messages:[ { role:'user', // Insert your question for the model here, instead of Hello: content: 'Hello' } ] }), }); if (!response.ok) { throw new Error(`HTTP error! 
Status ${response.status}`); } const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } catch (error) { console.error('Error', error); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
**Response**:

{% code overflow="wrap" %}
```json
{
  "id": "msg_01MmQNxa1E5mU8EyMXzT9zEU",
  "object": "chat.completion",
  "model": "claude-3-7-sonnet-20250219",
  "choices": [
    {
      "index": 0,
      "message": {
        "reasoning_content": "",
        "content": "Hello! How can I assist you today? Whether you have a question, need information, or would like to discuss a particular topic, I'm here to help. What's on your mind?",
        "role": "assistant"
      },
      "finish_reason": "end_turn",
      "logprobs": null
    }
  ],
  "created": 1744218600,
  "usage": {
    "prompt_tokens": 50,
    "completion_tokens": 1323,
    "total_tokens": 1373
  }
}
```
{% endcode %}
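If you only need the assistant's reply text rather than the full response object, you can read it from the `choices` array. A minimal sketch, assuming the `data` variable from the Python code example above:

{% code overflow="wrap" %}
```python
# Extract the reply and token usage from the response shown above
reply = data["choices"][0]["message"]["content"]
total_tokens = data["usage"]["total_tokens"]
print(reply)
print(f"Tokens used: {total_tokens}")
```
{% endcode %}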
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4-5-sonnet.md # Claude 4.5 Sonnet

This documentation is valid for the following models:

  • claude-sonnet-4-5
  • anthropic/claude-sonnet-4.5
  • claude-sonnet-4-5-20250929
## Model Overview

A major improvement over [Claude 4 Sonnet](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4-sonnet), offering better coding abilities, stronger reasoning, and more accurate responses to your instructions.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `<YOUR_AIMLAPI_KEY>` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field; this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. A short sketch of a request with optional parameters follows these steps.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
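To illustrate step :digit\_four:, here is a minimal sketch (mirroring the Python code example below) of a request that adds the optional `temperature` and `max_tokens` parameters on top of the required `model` and `messages` fields. The `<YOUR_AIMLAPI_KEY>` placeholder stands for your own key, and the parameter values are arbitrary examples rather than recommendations.

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        # Required parameters:
        "model": "anthropic/claude-sonnet-4.5",
        "messages": [{"role": "user", "content": "Hello"}],
        # Optional parameters (see the API schema below for the full list):
        "temperature": 0.2,  # lower values make the output more focused and deterministic
        "max_tokens": 512,   # upper bound on the number of generated tokens
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}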
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["anthropic/claude-sonnet-4.5","claude-sonnet-4-5","claude-sonnet-4-5-20250929"]},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"source":{"type":"object","properties":{"type":{"type":"string","enum":["base64"]},"media_type":{"type":"string","enum":["image/jpeg","image/png","image/gif","image/webp"]},"data":{"type":"string"}},"required":["type","media_type","data"]}},"required":["type","source"]},{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_audio"],"description":"The type of the content part."},"input_audio":{"type":"object","properties":{"data":{"type":"string","description":"Base64 encoded audio data."},"format":{"type":"string","enum":["wav","mp3"],"description":"The format of the encoded audio data. Currently supports \"wav\" and \"mp3\"."}},"required":["data","format"]}},"required":["type","input_audio"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. 
Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["function"]},"content":{"type":"string"},"name":{"type":"string"}},"required":["role","content","name"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for a previous audio response from the model."}},"required":["id"],"description":"Data about a previous audio response from the model."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"],"additionalProperties":false},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"top_p":{"type":"number","minimum":0.1,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"]},"function":{"type":"object","properties":{"name":{"type":"string"}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"reasoning_effort":{"type":"string","enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"audio":{"type":"object","nullable":true,"properties":{"format":{"type":"string","enum":["wav","mp3","flac","opus","pcm16"],"description":"Specifies the output audio format. Must be one of wav, mp3, flac, opus, or pcm16."},"voice":{"type":"string","enum":["alloy","ash","ballad","coral","echo","fable","nova","onyx","sage","shimmer"],"description":"The voice the model uses to respond. Supported voices are alloy, ash, ballad, coral, echo, fable, nova, onyx, sage, and shimmer."}},"required":["format","voice"],"description":"Parameters for audio output. Required when audio output is requested with modalities: [\"audio\"]."},"modalities":{"type":"array","nullable":true,"items":{"type":"string","enum":["text","audio"]},"description":"Output types that you would like the model to generate. Most models are capable of generating text, which is the default:\n \n [\"text\"]\n \n Model can also be used to generate audio. To request that this model generate both text and audio responses, you can use:\n \n [\"text\", \"audio\"]"},"web_search_options":{"type":"object","properties":{"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"approximate":{"type":"object","properties":{"city":{"type":"string","description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"description":"Approximate location parameters for the search."},"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. 
Always approximate."}},"required":["approximate","type"],"description":"Approximate location parameters for the search."}},"description":"This tool searches the web for relevant results to use in a response."}},"required":["model","messages"],"title":"claude-sonnet-4-5-20250929"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"anthropic/claude-sonnet-4.5", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { try { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of YOUR_AIMLAPI_KEY 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'anthropic/claude-sonnet-4.5', messages:[ { role:'user', // Insert your question for the model here, instead of Hello: content: 'Hello' } ] }), }); if (!response.ok) { throw new Error(`HTTP error! 
Status ${response.status}`); } const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } catch (error) { console.error('Error', error); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
**Response**:

{% code overflow="wrap" %}
```json5
{
  "id": "msg_011MNbgezv2p5BBE9RvnsZV9",
  "object": "chat.completion",
  "model": "claude-sonnet-4-20250514",
  "choices": [
    {
      "index": 0,
      "message": {
        "reasoning_content": "",
        "content": "Hello! How are you doing today? Is there anything I can help you with?",
        "role": "assistant"
      },
      "finish_reason": "end_turn",
      "logprobs": null
    }
  ],
  "created": 1748522617,
  "usage": {
    "prompt_tokens": 50,
    "completion_tokens": 630,
    "total_tokens": 680
  }
}
```
{% endcode %}
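If you only need the generated text rather than the full JSON, the reply can be read from the first choice of the response shown above. A minimal sketch (it assumes `data` already holds the parsed JSON from `response.json()` in the Python example):

{% code overflow="wrap" %}
```python
# Minimal sketch: extract the assistant's reply from the response shown above.
# Assumes `data` is the dict returned by response.json() in the code example.
reply = data["choices"][0]["message"]["content"]
print(reply)  # e.g. "Hello! How are you doing today? ..."
```
{% endcode %}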
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4-opus.md

# Claude 4 Opus

This documentation is valid for the following model:

  • anthropic/claude-opus-4
Try in Playground
## Model Overview

The leading coding model globally, consistently excelling at complex, long-duration tasks and agent-based workflows.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `<YOUR_AIMLAPI_KEY>` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field: this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them; a short sketch follows these steps.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
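As an illustration of step :digit\_four:, the minimal sketch below adds two optional parameters described in the API schema further down this page, `temperature` and `max_tokens`. The values are illustrative, not recommendations, and `<YOUR_AIMLAPI_KEY>` is a placeholder for your own key:

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "anthropic/claude-opus-4",
        "messages": [
            {"role": "user", "content": "Hello"}  # insert your prompt here
        ],
        # Optional parameters from the API schema below (illustrative values):
        "temperature": 0.2,  # 0-2; lower values make output more deterministic
        "max_tokens": 512,   # upper bound on tokens generated in the completion
    },
)
print(response.json())
```
{% endcode %}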
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["anthropic/claude-opus-4","claude-opus-4","claude-opus-4-20250514"]},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"source":{"type":"object","properties":{"type":{"type":"string","enum":["base64"]},"media_type":{"type":"string","enum":["image/jpeg","image/png","image/gif","image/webp"]},"data":{"type":"string"}},"required":["type","media_type","data"]}},"required":["type","source"]},{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_audio"],"description":"The type of the content part."},"input_audio":{"type":"object","properties":{"data":{"type":"string","description":"Base64 encoded audio data."},"format":{"type":"string","enum":["wav","mp3"],"description":"The format of the encoded audio data. Currently supports \"wav\" and \"mp3\"."}},"required":["data","format"]}},"required":["type","input_audio"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. 
Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["function"]},"content":{"type":"string"},"name":{"type":"string"}},"required":["role","content","name"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for a previous audio response from the model."}},"required":["id"],"description":"Data about a previous audio response from the model."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"],"additionalProperties":false},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"top_p":{"type":"number","minimum":0.1,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"]},"function":{"type":"object","properties":{"name":{"type":"string"}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"reasoning_effort":{"type":"string","enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"audio":{"type":"object","nullable":true,"properties":{"format":{"type":"string","enum":["wav","mp3","flac","opus","pcm16"],"description":"Specifies the output audio format. Must be one of wav, mp3, flac, opus, or pcm16."},"voice":{"type":"string","enum":["alloy","ash","ballad","coral","echo","fable","nova","onyx","sage","shimmer"],"description":"The voice the model uses to respond. Supported voices are alloy, ash, ballad, coral, echo, fable, nova, onyx, sage, and shimmer."}},"required":["format","voice"],"description":"Parameters for audio output. Required when audio output is requested with modalities: [\"audio\"]."},"modalities":{"type":"array","nullable":true,"items":{"type":"string","enum":["text","audio"]},"description":"Output types that you would like the model to generate. Most models are capable of generating text, which is the default:\n \n [\"text\"]\n \n Model can also be used to generate audio. To request that this model generate both text and audio responses, you can use:\n \n [\"text\", \"audio\"]"},"web_search_options":{"type":"object","properties":{"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"approximate":{"type":"object","properties":{"city":{"type":"string","description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"description":"Approximate location parameters for the search."},"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. 
Always approximate."}},"required":["approximate","type"],"description":"Approximate location parameters for the search."}},"description":"This tool searches the web for relevant results to use in a response."}},"required":["model","messages"],"title":"claude-opus-4-20250514"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"anthropic/claude-opus-4", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { try { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of YOUR_AIMLAPI_KEY 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'anthropic/claude-opus-4', messages:[ { role:'user', // Insert your question for the model here, instead of Hello: content: 'Hello' } ] }), }); if (!response.ok) { throw new Error(`HTTP error! 
Status ${response.status}`); } const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } catch (error) { console.error('Error', error); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
**Response**:

{% code overflow="wrap" %}
```json5
{
  "id": "msg_01BDDxHJZjH3UBwLrZBUiASE",
  "object": "chat.completion",
  "model": "claude-opus-4-20250514",
  "choices": [
    {
      "index": 0,
      "message": {
        "reasoning_content": "",
        "content": "Hello! How can I help you today?",
        "role": "assistant"
      },
      "finish_reason": "end_turn",
      "logprobs": null
    }
  ],
  "created": 1748529508,
  "usage": {
    "prompt_tokens": 252,
    "completion_tokens": 1890,
    "total_tokens": 2142
  }
}
```
{% endcode %}
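The API schema above also accepts `stream: true`, in which case the response arrives as server-sent events of `chat.completion.chunk` objects with the text in `choices[].delta.content`. The sketch below assumes OpenAI-style `data:` framing with a `data: [DONE]` sentinel, which is not stated explicitly on this page:

{% code overflow="wrap" %}
```python
import json
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "anthropic/claude-opus-4",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,  # request server-sent events instead of a single JSON body
    },
    stream=True,
)

for line in response.iter_lines():
    if not line:
        continue
    decoded = line.decode("utf-8")
    # Assumption: chunks are framed as OpenAI-style "data: {...}" lines.
    if not decoded.startswith("data:"):
        continue
    payload = decoded[len("data:"):].strip()
    if payload == "[DONE]":  # assumed end-of-stream sentinel
        break
    chunk = json.loads(payload)
    delta = chunk["choices"][0].get("delta") or {}
    print(delta.get("content", ""), end="", flush=True)
```
{% endcode %}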
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4-sonnet.md

# Claude 4 Sonnet

This documentation is valid for the following list of our models:

  • anthropic/claude-sonnet-4
  • claude-sonnet-4
  • claude-sonnet-4-20250514
Try in Playground
## Model Overview

A major improvement over [Claude 3.7 Sonnet](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-3.7-sonnet), offering better coding abilities, stronger reasoning, and more accurate responses to your instructions.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `<YOUR_AIMLAPI_KEY>` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field: this is what the model will respond to (the sketch after these steps shows how to attach an image as well).

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
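The user-message schema below also accepts `image_url` content parts (JPG/JPEG, PNG, GIF, and WEBP), so a prompt can combine text with an image. A minimal sketch; the image URL is a placeholder, not a real asset, and `<YOUR_AIMLAPI_KEY>` stands for your own key:

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "anthropic/claude-sonnet-4",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "What is shown in this picture?"},
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": "https://example.com/photo.jpg",  # placeholder URL
                            "detail": "auto",  # optional: low, high, or auto
                        },
                    },
                ],
            }
        ],
    },
)
print(response.json())
```
{% endcode %}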
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["anthropic/claude-sonnet-4","claude-sonnet-4","claude-sonnet-4-20250514"]},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"source":{"type":"object","properties":{"type":{"type":"string","enum":["base64"]},"media_type":{"type":"string","enum":["image/jpeg","image/png","image/gif","image/webp"]},"data":{"type":"string"}},"required":["type","media_type","data"]}},"required":["type","source"]},{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_audio"],"description":"The type of the content part."},"input_audio":{"type":"object","properties":{"data":{"type":"string","description":"Base64 encoded audio data."},"format":{"type":"string","enum":["wav","mp3"],"description":"The format of the encoded audio data. Currently supports \"wav\" and \"mp3\"."}},"required":["data","format"]}},"required":["type","input_audio"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. 
Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["function"]},"content":{"type":"string"},"name":{"type":"string"}},"required":["role","content","name"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for a previous audio response from the model."}},"required":["id"],"description":"Data about a previous audio response from the model."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"],"additionalProperties":false},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"top_p":{"type":"number","minimum":0.1,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"]},"function":{"type":"object","properties":{"name":{"type":"string"}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"reasoning_effort":{"type":"string","enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"audio":{"type":"object","nullable":true,"properties":{"format":{"type":"string","enum":["wav","mp3","flac","opus","pcm16"],"description":"Specifies the output audio format. Must be one of wav, mp3, flac, opus, or pcm16."},"voice":{"type":"string","enum":["alloy","ash","ballad","coral","echo","fable","nova","onyx","sage","shimmer"],"description":"The voice the model uses to respond. Supported voices are alloy, ash, ballad, coral, echo, fable, nova, onyx, sage, and shimmer."}},"required":["format","voice"],"description":"Parameters for audio output. Required when audio output is requested with modalities: [\"audio\"]."},"modalities":{"type":"array","nullable":true,"items":{"type":"string","enum":["text","audio"]},"description":"Output types that you would like the model to generate. Most models are capable of generating text, which is the default:\n \n [\"text\"]\n \n Model can also be used to generate audio. To request that this model generate both text and audio responses, you can use:\n \n [\"text\", \"audio\"]"},"web_search_options":{"type":"object","properties":{"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"approximate":{"type":"object","properties":{"city":{"type":"string","description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"description":"Approximate location parameters for the search."},"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. 
Always approximate."}},"required":["approximate","type"],"description":"Approximate location parameters for the search."}},"description":"This tool searches the web for relevant results to use in a response."}},"required":["model","messages"],"title":"claude-sonnet-4-20250514"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"anthropic/claude-sonnet-4", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { try { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of YOUR_AIMLAPI_KEY 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'anthropic/claude-sonnet-4', messages:[ { role:'user', // Insert your question for the model here, instead of Hello: content: 'Hello' } ] }), }); if (!response.ok) { throw new Error(`HTTP error! 
Status ${response.status}`); } const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } catch (error) { console.error('Error', error); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
**Response**:

{% code overflow="wrap" %}
```json5
{
  "id": "msg_011MNbgezv2p5BBE9RvnsZV9",
  "object": "chat.completion",
  "model": "claude-sonnet-4-20250514",
  "choices": [
    {
      "index": 0,
      "message": {
        "reasoning_content": "",
        "content": "Hello! How are you doing today? Is there anything I can help you with?",
        "role": "assistant"
      },
      "finish_reason": "end_turn",
      "logprobs": null
    }
  ],
  "created": 1748522617,
  "usage": {
    "prompt_tokens": 50,
    "completion_tokens": 630,
    "total_tokens": 680
  }
}
```
{% endcode %}
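The API schema above also lists a `stream` flag for receiving the completion incrementally. The snippet below is a minimal sketch rather than an official example: it assumes the common OpenAI-compatible server-sent-events framing (`data: `-prefixed lines and a final `[DONE]` sentinel) together with the chunk structure described in the `text/event-stream` response schema, and it uses the `<YOUR_AIMLAPI_KEY>` placeholder in the same way as the code example above.

{% code overflow="wrap" %}
```python
import requests
import json

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "anthropic/claude-sonnet-4",
        "messages": [
            {"role": "user", "content": "Write a haiku about the sea."}
        ],
        "stream": True,
        "stream_options": {"include_usage": True},
    },
    stream=True,
)
response.raise_for_status()

for line in response.iter_lines(decode_unicode=True):
    # Each event is assumed to arrive as a line prefixed with "data: ".
    if not line or not line.startswith("data: "):
        continue
    payload = line[len("data: "):]
    if payload == "[DONE]":
        break
    chunk = json.loads(payload)
    # With include_usage, the final chunk may carry only usage data and no choices.
    choices = chunk.get("choices") or []
    if choices:
        delta = choices[0].get("delta") or {}
        print(delta.get("content") or "", end="", flush=True)
```
{% endcode %}

Concatenating the printed deltas yields the same text that a non-streaming call returns in `choices[0].message.content`.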
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4.5-haiku.md # Claude 4.5 Haiku

This documentation is valid for the following list of our models:

* claude-haiku-4-5
* anthropic/claude-haiku-4.5
* claude-haiku-4-5-20251001
## Model Overview

The model offers coding performance comparable to [Claude Sonnet 4](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4-sonnet), but at one-third the cost and more than twice the speed.

## How to Make a Call
**Step-by-Step Instructions**

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `<YOUR_AIMLAPI_KEY>` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
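As a quick sanity check of the steps above, here is a minimal sketch of a request for this model. It sends only the required `model` and `messages` fields, using the `anthropic/claude-haiku-4.5` ID listed at the top of this page and the `<YOUR_AIMLAPI_KEY>` placeholder; the full [code example](#code-example) at the bottom of this page remains the reference version.

{% code overflow="wrap" %}
```python
import requests
import json

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        # Only "model" and "messages" are required; see the API schema below
        # for the optional parameters.
        "model": "anthropic/claude-haiku-4.5",
        "messages": [
            {"role": "user", "content": "Hello"}
        ],
    },
)
response.raise_for_status()
print(json.dumps(response.json(), indent=2, ensure_ascii=False))
```
{% endcode %}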
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["anthropic/claude-haiku-4.5","claude-haiku-4-5","claude-haiku-4-5-20251001"]},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"source":{"type":"object","properties":{"type":{"type":"string","enum":["base64"]},"media_type":{"type":"string","enum":["image/jpeg","image/png","image/gif","image/webp"]},"data":{"type":"string"}},"required":["type","media_type","data"]}},"required":["type","source"]},{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_audio"],"description":"The type of the content part."},"input_audio":{"type":"object","properties":{"data":{"type":"string","description":"Base64 encoded audio data."},"format":{"type":"string","enum":["wav","mp3"],"description":"The format of the encoded audio data. Currently supports \"wav\" and \"mp3\"."}},"required":["data","format"]}},"required":["type","input_audio"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. 
Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["function"]},"content":{"type":"string"},"name":{"type":"string"}},"required":["role","content","name"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for a previous audio response from the model."}},"required":["id"],"description":"Data about a previous audio response from the model."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"],"additionalProperties":false},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"top_p":{"type":"number","minimum":0.1,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"]},"function":{"type":"object","properties":{"name":{"type":"string"}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"reasoning_effort":{"type":"string","enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"audio":{"type":"object","nullable":true,"properties":{"format":{"type":"string","enum":["wav","mp3","flac","opus","pcm16"],"description":"Specifies the output audio format. Must be one of wav, mp3, flac, opus, or pcm16."},"voice":{"type":"string","enum":["alloy","ash","ballad","coral","echo","fable","nova","onyx","sage","shimmer"],"description":"The voice the model uses to respond. Supported voices are alloy, ash, ballad, coral, echo, fable, nova, onyx, sage, and shimmer."}},"required":["format","voice"],"description":"Parameters for audio output. Required when audio output is requested with modalities: [\"audio\"]."},"modalities":{"type":"array","nullable":true,"items":{"type":"string","enum":["text","audio"]},"description":"Output types that you would like the model to generate. Most models are capable of generating text, which is the default:\n \n [\"text\"]\n \n Model can also be used to generate audio. To request that this model generate both text and audio responses, you can use:\n \n [\"text\", \"audio\"]"},"web_search_options":{"type":"object","properties":{"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"approximate":{"type":"object","properties":{"city":{"type":"string","description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"description":"Approximate location parameters for the search."},"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. 
Always approximate."}},"required":["approximate","type"],"description":"Approximate location parameters for the search."}},"description":"This tool searches the web for relevant results to use in a response."}},"required":["model","messages"],"title":"claude-haiku-4-5-20251001"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"anthropic/claude-haiku-4.5", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { try { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of YOUR_AIMLAPI_KEY 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'anthropic/claude-haiku-4.5', messages:[ { role:'user', // Insert your question for the model here, instead of Hello: content: 'Hello' } ] }), }); if (!response.ok) { throw new Error(`HTTP error! 
Status ${response.status}`); } const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } catch (error) { console.error('Error', error); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
**Response**:

{% code overflow="wrap" %}
```json5
{
  "id": "msg_01HbdLU9f78VAHxuYZ7Qp9Y1",
  "object": "chat.completion",
  "model": "claude-haiku-4-5-20251001",
  "choices": [
    {
      "index": 0,
      "message": {
        "reasoning_content": "",
        "content": "Hello! 👋 How can I help you today?",
        "role": "assistant"
      },
      "finish_reason": "end_turn",
      "logprobs": null
    }
  ],
  "created": 1760650965,
  "usage": {
    "prompt_tokens": 8,
    "completion_tokens": 16,
    "total_tokens": 24
  }
}
```
{% endcode %}
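The API schema above also documents a `stream` request parameter and a `text/event-stream` response made of `chat.completion.chunk` objects. The Python sketch below shows one way to consume that stream; the exact server-sent-events framing (a `data:` prefix per event and a final `[DONE]` sentinel) is an assumption borrowed from common SSE chat APIs, so verify it against your actual responses.

{% code overflow="wrap" %}
```python
import json
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "anthropic/claude-haiku-4.5",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,  # request server-sent events instead of a single JSON body
    },
    stream=True,  # let requests yield the response incrementally
)
response.raise_for_status()

for line in response.iter_lines(decode_unicode=True):
    # Each event arrives as a "data: {...}" line carrying a chat.completion.chunk;
    # the "[DONE]" sentinel is an assumption, not confirmed by the schema above.
    if not line or not line.startswith("data:"):
        continue
    payload = line[len("data:"):].strip()
    if payload == "[DONE]":
        break
    chunk = json.loads(payload)
    delta = chunk["choices"][0].get("delta") or {}
    print(delta.get("content") or "", end="", flush=True)
print()
```
{% endcode %}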
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4.5-opus.md # Claude 4.5 Opus

This documentation is valid for the following list of our models:

  • anthropic/claude-opus-4-5
  • claude-opus-4-5
  • claude-opus-4-5-20251101
## Model Overview

A high-performance chat model that delivers state-of-the-art results on real-world software engineering benchmarks.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `<YOUR_AIMLAPI_KEY>` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field; this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters to adjust the model’s behavior; a short sketch with a couple of them follows these instructions. Below, you can also find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
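The sketch below (Python, mirroring the code example further down this page) illustrates a request with a couple of the optional parameters from the API schema: `temperature` and `max_tokens`. The specific values are arbitrary and only meant to show where the parameters go.

{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "claude-opus-4-5",
        "messages": [
            {"role": "user", "content": "Explain nucleus sampling in two sentences."}
        ],
        # Optional parameters (illustrative values; see the API schema for valid ranges):
        "temperature": 0.7,  # 0 to 2: higher values are more random, lower more deterministic
        "max_tokens": 512,   # upper bound on generated tokens, useful for cost control
    },
)
data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}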
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["anthropic/claude-opus-4-5","claude-opus-4-5","claude-opus-4-5-20251101"]},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"source":{"type":"object","properties":{"type":{"type":"string","enum":["base64"]},"media_type":{"type":"string","enum":["image/jpeg","image/png","image/gif","image/webp"]},"data":{"type":"string"}},"required":["type","media_type","data"]}},"required":["type","source"]},{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_audio"],"description":"The type of the content part."},"input_audio":{"type":"object","properties":{"data":{"type":"string","description":"Base64 encoded audio data."},"format":{"type":"string","enum":["wav","mp3"],"description":"The format of the encoded audio data. Currently supports \"wav\" and \"mp3\"."}},"required":["data","format"]}},"required":["type","input_audio"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. 
Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["function"]},"content":{"type":"string"},"name":{"type":"string"}},"required":["role","content","name"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for a previous audio response from the model."}},"required":["id"],"description":"Data about a previous audio response from the model."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"],"additionalProperties":false},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"top_p":{"type":"number","minimum":0.1,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"]},"function":{"type":"object","properties":{"name":{"type":"string"}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"reasoning_effort":{"type":"string","enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"audio":{"type":"object","nullable":true,"properties":{"format":{"type":"string","enum":["wav","mp3","flac","opus","pcm16"],"description":"Specifies the output audio format. Must be one of wav, mp3, flac, opus, or pcm16."},"voice":{"type":"string","enum":["alloy","ash","ballad","coral","echo","fable","nova","onyx","sage","shimmer"],"description":"The voice the model uses to respond. Supported voices are alloy, ash, ballad, coral, echo, fable, nova, onyx, sage, and shimmer."}},"required":["format","voice"],"description":"Parameters for audio output. Required when audio output is requested with modalities: [\"audio\"]."},"modalities":{"type":"array","nullable":true,"items":{"type":"string","enum":["text","audio"]},"description":"Output types that you would like the model to generate. Most models are capable of generating text, which is the default:\n \n [\"text\"]\n \n Model can also be used to generate audio. To request that this model generate both text and audio responses, you can use:\n \n [\"text\", \"audio\"]"},"web_search_options":{"type":"object","properties":{"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"approximate":{"type":"object","properties":{"city":{"type":"string","description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"description":"Approximate location parameters for the search."},"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. 
Always approximate."}},"required":["approximate","type"],"description":"Approximate location parameters for the search."}},"description":"This tool searches the web for relevant results to use in a response."}},"required":["model","messages"],"title":"claude-opus-4-5-20251101"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"claude-opus-4-5", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { try { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of YOUR_AIMLAPI_KEY 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'claude-opus-4-5', messages:[ { role:'user', // Insert your question for the model here, instead of Hello: content: 'Hello' } ] }), }); if (!response.ok) { throw new Error(`HTTP error! 
Status ${response.status}`); } const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } catch (error) { console.error('Error', error); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "msg_01NxAGYo8VfNu5UAEdmQjv62", "object": "chat.completion", "model": "claude-opus-4-5-20251101", "choices": [ { "index": 0, "message": { "reasoning_content": "", "content": "Hello! How are you doing today? Is there something I can help you with?", "role": "assistant" }, "finish_reason": "end_turn", "logprobs": null } ], "created": 1764265437, "usage": { "prompt_tokens": 8, "completion_tokens": 20, "total_tokens": 28 }, "meta": { "usage": { "tokens_used": 1134 } } } ``` {% endcode %}
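If you only need the model's reply text rather than the full JSON, you can read it from the first choice. A minimal sketch, assuming it is appended to the end of the Python example above (where `data` already holds the parsed response):

{% code overflow="wrap" %}
```python
# `data` is the dict returned by response.json() in the example above.
reply = data["choices"][0]["message"]["content"]
print(reply)  # e.g. "Hello! How are you doing today? ..."
```
{% endcode %}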
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-opus-4.1.md # Claude 4.1 Opus

This documentation is valid for the following list of our models:

  • anthropic/claude-opus-4.1
  • claude-opus-4-1
  • claude-opus-4-1-20250805
Try in Playground
{% hint style="success" %} All three IDs listed above refer to the same model; we support them for backward compatibility. {% endhint %} ## Model Overview An upgrade to [Claude Opus 4](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4-opus), with improved performance on agentic tasks, real-world coding, and thinking. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [code examples](#code-example-1-without-thinking) that show how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `<YOUR_AIMLAPI_KEY>` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to. :digit\_four: **(Optional)** **Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["anthropic/claude-opus-4.1","claude-opus-4-1","claude-opus-4-1-20250805"]},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"source":{"type":"object","properties":{"type":{"type":"string","enum":["base64"]},"media_type":{"type":"string","enum":["image/jpeg","image/png","image/gif","image/webp"]},"data":{"type":"string"}},"required":["type","media_type","data"]}},"required":["type","source"]},{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_audio"],"description":"The type of the content part."},"input_audio":{"type":"object","properties":{"data":{"type":"string","description":"Base64 encoded audio data."},"format":{"type":"string","enum":["wav","mp3"],"description":"The format of the encoded audio data. Currently supports \"wav\" and \"mp3\"."}},"required":["data","format"]}},"required":["type","input_audio"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. 
Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["function"]},"content":{"type":"string"},"name":{"type":"string"}},"required":["role","content","name"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for a previous audio response from the model."}},"required":["id"],"description":"Data about a previous audio response from the model."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"],"additionalProperties":false},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"top_p":{"type":"number","minimum":0.1,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"]},"function":{"type":"object","properties":{"name":{"type":"string"}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"reasoning_effort":{"type":"string","enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"audio":{"type":"object","nullable":true,"properties":{"format":{"type":"string","enum":["wav","mp3","flac","opus","pcm16"],"description":"Specifies the output audio format. Must be one of wav, mp3, flac, opus, or pcm16."},"voice":{"type":"string","enum":["alloy","ash","ballad","coral","echo","fable","nova","onyx","sage","shimmer"],"description":"The voice the model uses to respond. Supported voices are alloy, ash, ballad, coral, echo, fable, nova, onyx, sage, and shimmer."}},"required":["format","voice"],"description":"Parameters for audio output. Required when audio output is requested with modalities: [\"audio\"]."},"modalities":{"type":"array","nullable":true,"items":{"type":"string","enum":["text","audio"]},"description":"Output types that you would like the model to generate. Most models are capable of generating text, which is the default:\n \n [\"text\"]\n \n Model can also be used to generate audio. To request that this model generate both text and audio responses, you can use:\n \n [\"text\", \"audio\"]"},"web_search_options":{"type":"object","properties":{"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"approximate":{"type":"object","properties":{"city":{"type":"string","description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"description":"Approximate location parameters for the search."},"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. 
Always approximate."}},"required":["approximate","type"],"description":"Approximate location parameters for the search."}},"description":"This tool searches the web for relevant results to use in a response."}},"required":["model","messages"],"title":"claude-opus-4-1-20250805"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example #1: Without Thinking {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"anthropic/claude-opus-4.1", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { try { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of YOUR_AIMLAPI_KEY 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'anthropic/claude-opus-4.1', messages:[ { role:'user', // Insert your question for the model here, instead of Hello: content: 'Hello' } ] }), }); if (!response.ok) { throw new Error(`HTTP error! 
Status ${response.status}`); } const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } catch (error) { console.error('Error', error); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "msg_018y2VPSZ5nNnqS3goMsjMxE", "object": "chat.completion", "model": "claude-opus-4-1-20250805", "choices": [ { "index": 0, "message": { "reasoning_content": "", "content": "Hello! How can I help you today?", "role": "assistant" }, "finish_reason": "end_turn", "logprobs": null } ], "created": 1754552562, "usage": { "prompt_tokens": 252, "completion_tokens": 1890, "total_tokens": 2142 } } ``` {% endcode %}
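The `usage` block shown above can be used to track token consumption per request. A minimal sketch, assuming it is appended to the Python code from Code Example #1 (where `data` holds the parsed response):

{% code overflow="wrap" %}
```python
# Summarize token usage reported for this request.
usage = data["usage"]
print(
    f"prompt: {usage['prompt_tokens']}, "
    f"completion: {usage['completion_tokens']}, "
    f"total: {usage['total_tokens']}"
)
```
{% endcode %}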
## Code Example #2: Thinking Enabled {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"anthropic/claude-opus-4.1", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], "max_tokens": 1025, # must be greater than 'budget_tokens' "thinking":{ "budget_tokens": 1024, "type": "enabled" } } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { try { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of YOUR_AIMLAPI_KEY 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'anthropic/claude-opus-4.1', messages:[ { role:'user', // Insert your question for the model here, instead of Hello: content: 'Hello' } ], max_tokens: 1025, // must be greater than 'budget_tokens' thinking:{ budget_tokens: 1024, type: 'enabled' } }), }); if (!response.ok) { throw new Error(`HTTP error! Status ${response.status}`); } const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } catch (error) { console.error('Error', error); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "msg_01G9P4b9HG3PeKm1rRvS8kop", "object": "chat.completion", "model": "claude-opus-4-1-20250805", "choices": [ { "index": 0, "message": { "reasoning_content": "The human has greeted me with a simple \"Hello\". I should respond in a friendly and helpful manner, acknowledging their greeting and inviting them to share how I can assist them today.", "content": "Hello! How can I help you today?", "role": "assistant" }, "finish_reason": "end_turn", "logprobs": null } ], "created": 1755704373, "usage": { "prompt_tokens": 1134, "completion_tokens": 9450, "total_tokens": 10584 } } ``` {% endcode %}
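With thinking enabled, the reasoning trace is returned separately from the final answer in the `reasoning_content` field, as shown in the response above. A minimal sketch for reading both parts, assuming it is appended to the Python code from Code Example #2 (where `data` holds the parsed response):

{% code overflow="wrap" %}
```python
# Separate the model's reasoning trace from its final answer.
message = data["choices"][0]["message"]
print("Reasoning:", message.get("reasoning_content", ""))
print("Answer:", message["content"])
```
{% endcode %}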
--- # Source: https://docs.aimlapi.com/integrations/cline.md # Cline ## About Cline is an open-source AI coding assistant with two working modes (Plan/Act), terminal command execution, and support for the Model Context Protocol (MCP) in VS Code. You can find the Cline repository and community on [GitHub](https://github.com/cline). ## Installing Cline in VS Code 1. Open the **Extensions** tab in the VS Code sidebar.
2. In the search bar, type **Cline**. 3. Find the extension and click **Install**.
4. After installation, a separate **Cline** tab will appear in the sidebar.
## **Configuring Cline** 1. Go to the **Cline** tab in the sidebar. 2. Click the gear icon in the top-right corner.
In the settings: * Set **API Provider** to **OpenAI Compatible**. * In **Base URL**, enter one of our available endpoints. * In **API Key**, enter your [AI/ML API key](https://aimlapi.com/app/keys). * In **Model ID**, specify the model name. You can find some model selection tips in our [description of code generation as a capability](https://docs.aimlapi.com/capabilities/code-generation). * Click **Save**. All done — start coding with Cline! ## Usage Example Here’s the request we made: ``` Create a Python file named test and add code to print Hello, world ```
If you expand the **API Request** section, you can view the data — including your prompt. Since we asked to create a file in the request, the file was generated. You can see a preview and its contents, but it hasn’t been saved yet. Before saving the file, Cline asks for confirmation.
Once the file is saved, a second API request appears with metadata, along with a notification that the task was successfully completed. ## **Supported Models** These models have been tested by our team for compatibility with the Cline integration.
Supported Model List * [gpt-3.5-turbo](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-3.5-turbo) * [gpt-3.5-turbo-0125](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-3.5-turbo) * [gpt-3.5-turbo-1106](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-3.5-turbo) * [gpt-4o](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) * [gpt-4o-2024-05-13](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) * [gpt-4o-2024-08-06](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) * [gpt-4o-mini](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o-mini) * [gpt-4o-mini-2024-07-18](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o-mini) * [chatgpt-4o-latest](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) * [gpt-4o-2024-05-13](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) * [gpt-4o-2024-08-06](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) * [gpt-4-turbo](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4-turbo) * [gpt-4-turbo-2024-04-09](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4-turbo) * [gpt-4-0125-preview](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4-preview) * [gpt-4-1106-preview](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4-preview) * [o3-mini](https://docs.aimlapi.com/api-references/text-models-llm/openai/o3-mini) * [openai/gpt-4.1-2025-04-14](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4.1) * [openai/gpt-4.1-mini-2025-04-14](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4.1-mini) * [openai/gpt-4.1-nano-2025-04-14](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4.1-nano) * [openai/o4-mini-2025-04-16](https://docs.aimlapi.com/api-references/text-models-llm/openai/o4-mini) * [deepseek/deepseek-chat](https://docs.aimlapi.com/api-references/text-models-llm/deepseek/deepseek-chat) * [deepseek/deepseek-r1](https://docs.aimlapi.com/api-references/text-models-llm/deepseek/deepseek-r1) * [meta-llama/Llama-3.3-70B-Instruct-Turbo](https://docs.aimlapi.com/api-references/text-models-llm/meta/llama-3.3-70b-instruct-turbo) * [meta-llama/Llama-3.2-3B-Instruct-Turbo](https://docs.aimlapi.com/api-references/text-models-llm/meta/llama-3.2-3b-instruct-turbo) * [meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo](https://docs.aimlapi.com/api-references/text-models-llm/meta/meta-llama-3.1-405b-instruct-turbo) * [meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo](https://docs.aimlapi.com/api-references/text-models-llm/meta/meta-llama-3.1-8b-instruct-turbo) * [meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo](https://docs.aimlapi.com/api-references/text-models-llm/meta/meta-llama-3.1-70b-instruct-turbo) * [meta-llama/llama-4-maverick](https://docs.aimlapi.com/api-references/text-models-llm/meta/llama-4-maverick) * [Qwen/Qwen2.5-7B-Instruct-Turbo](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen2.5-7b-instruct-turbo) * [qwen-max](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen-max) * [qwen-max-2025-01-25](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen-max) * [qwen-plus](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen-plus) * [qwen-turbo](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen-turbo) * 
[Qwen/Qwen2.5-72B-Instruct-Turbo](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen2.5-72b-instruct-turbo) * [mistralai/Mixtral-8x7B-Instruct-v0.1](https://docs.aimlapi.com/api-references/text-models-llm/mistral-ai/mixtral-8x7b-instruct-v0.1) * [mistralai/Mistral-7B-Instruct-v0.1](https://docs.aimlapi.com/api-references/text-models-llm/mistral-ai/mistral-7b-instruct) * [mistralai/Mistral-7B-Instruct-v0.2](https://docs.aimlapi.com/api-references/text-models-llm/mistral-ai/mistral-7b-instruct) * [mistralai/Mistral-7B-Instruct-v0.3](https://docs.aimlapi.com/api-references/text-models-llm/mistral-ai/mistral-7b-instruct) * [mistralai/mistral-tiny](https://docs.aimlapi.com/api-references/text-models-llm/mistral-ai/mistral-tiny) * [mistralai/mistral-nemo](https://docs.aimlapi.com/api-references/text-models-llm/mistral-ai/mistral-nemo) * [google/gemini-2.0-flash-exp](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.0-flash-exp) * [gemini-2.0-flash-exp](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.0-flash-exp) * [google/gemini-2.0-flash](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.0-flash) * [x-ai/grok-3-beta](https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-3-beta) * [x-ai/grok-3-mini-beta](https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-3-mini-beta) * [anthracite-org/magnum-v4-72b](https://docs.aimlapi.com/api-references/text-models-llm/anthracite/magnum-v4) * [MiniMax-Text-01](https://docs.aimlapi.com/api-references/text-models-llm/minimax/text-01)
## Troubleshooting Possible Issues: * **403 status code (no body)** — This is the most common error. Possible causes: * You might need to use a different endpoint. Be sure to refer to the documentation for the specific model you've selected from our catalog! * The user may have run out of tokens or doesn’t have enough. Check your balance in your account dashboard. * **400 status code (no body)** — This error occurs when using models that are not compatible with the integration. See the previous section [Supported Models](#supported-models) :point\_up: --- # Source: https://docs.aimlapi.com/capabilities/code-generation.md # Code Generation ## Overview While all text models can write code in various languages upon request, some models are specifically trained for such tasks. These specialized models excel in generating functions, scripts, or even entire applications by understanding user intent and translating it into syntactically correct code. They support multiple programming languages and can provide solutions ranging from simple algorithms to complex system components. Beyond code generation, AI models help with debugging, refactoring, and optimization. Developers can ask for explanations of code snippets, receive suggestions for improvements, or convert code between languages. This capability streamlines development workflows, reduces repetitive tasks, and enhances productivity. ## Models That Support Code Generation Let's go over this again: any [text chat model](https://docs.aimlapi.com/api-references/model-database#text-models-llm) can generate some code based on your request. However, here is a list of models specifically trained for this by the developer: * [google/gemini-3-flash-preview](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-3-flash-preview) * [alibaba/qwen3-coder-480b-a35b-instruct](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-coder-480b-a35b-instruct) * [alibaba/qwen3-next-80b-a3b-instruct](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-next-80b-a3b-instruct) * [alibaba/qwen3-max-preview](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-max-preview) * [alibaba/qwen3-max-instruct](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-max-instruct) * [anthropic/claude-opus-4](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4-opus) * [anthropic/claude-sonnet-4](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4-sonnet) * [anthropic/claude-opus-4.1](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-opus-4.1) * [anthropic/claude-sonnet-4.5](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4-5-sonnet) * [anthropic/claude-haiku-4.5](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4.5-haiku) * [anthropic/claude-opus-4-5](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4.5-opus) * [minimax/m2](https://docs.aimlapi.com/api-references/text-models-llm/minimax/m2) * [minimax/m2-1](https://docs.aimlapi.com/api-references/text-models-llm/minimax/m2-1) * [moonshot/kimi-k2-preview](https://docs.aimlapi.com/api-references/text-models-llm/moonshot/kimi-k2-preview) * [moonshot/kimi-k2-0905-preview](https://docs.aimlapi.com/api-references/text-models-llm/moonshot/kimi-k2-preview) * [google/gemini-2.5-flash](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.5-flash) * 
[google/gemini-2.5-pro](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.5-pro) * [google/gemini-3-pro-preview](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-3-pro-preview) * [openai/gpt-5-2025-08-07](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5) * [openai/gpt-5-mini-2025-08-07](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-mini) * ​[openai/gpt-5-1​](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-1) * [​openai/gpt-5-1-chat-latest​](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-1-chat-latest) * [​openai/gpt-5-1-codex​](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-1-codex) * [​openai/gpt-5-1-codex-mini](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-1-codex-mini) * [openai/gpt-5-2](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5.2) * [openai/gpt-5-2-chat-latest](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5.2-chat-latest) * [openai/gpt-5-2-codex](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5.2-codex) * [x-ai/grok-code-fast-1](https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-code-fast-1) * [zhipu/glm-4.5-air](https://docs.aimlapi.com/api-references/text-models-llm/zhipu/glm-4.5-air) * [zhipu/glm-4.5](https://docs.aimlapi.com/api-references/text-models-llm/zhipu/glm-4.5) * [zhipu/glm-4.6](https://docs.aimlapi.com/api-references/text-models-llm/zhipu/glm-4.6) * [zhipu/glm-4.7](https://docs.aimlapi.com/api-references/text-models-llm/zhipu/glm-4.7) --- # Source: https://docs.aimlapi.com/api-references/text-models-llm/cohere.md # Cohere - [command-a](/api-references/text-models-llm/cohere/command-a.md) --- # Source: https://docs.aimlapi.com/api-references/text-models-llm/cohere/command-a.md # command-a

This documentation is valid for the following list of our models:

  • cohere/command-a
## Model Overview

A powerful LLM with advanced capabilities for enterprise applications.

## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to. :digit\_four: **(Optional) Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. :digit\_five: **Run your modified code** Execute the request from your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["cohere/command-a"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. 
This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"top_a":{"type":"number","minimum":0,"maximum":1,"description":"Alternate top sampling parameter."}},"required":["model","messages"],"title":"cohere/command-a"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"cohere/command-a", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { try { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of YOUR_AIMLAPI_KEY 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'cohere/command-a', messages:[ { role:'user', // Insert your question for the model here, instead of Hello: content: 'Hello' } ] }), }); if (!response.ok) { throw new Error(`HTTP error! Status ${response.status}`); } const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } catch (error) { console.error('Error', error); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "gen-1752165706-Nd1dXa1kuCCoOIpp5oxy", "object": "chat.completion", "choices": [ { "index": 0, "finish_reason": "stop", "logprobs": null, "message": { "role": "assistant", "content": "Hello! How can I assist you today?", "reasoning_content": null, "refusal": null } } ], "created": 1752165706, "model": "cohere/command-a", "usage": { "prompt_tokens": 5, "completion_tokens": 189, "total_tokens": 194 } } ``` {% endcode %}
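If you only need the assistant's reply text rather than the full JSON, you can read it from the first element of `choices`. A minimal follow-up to the Python example above (the variable `data` is the parsed response):

{% code overflow="wrap" %}
```python
# Extract just the generated text from the response shown above
reply = data["choices"][0]["message"]["content"]
print(reply)  # -> Hello! How can I assist you today?
```
{% endcode %}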
--- # Source: https://docs.aimlapi.com/api-references/service-endpoints/complete-model-list.md # Complete Model List ## Get Model List via API You can query the complete list of available models through this API.\ No API key is required for this request. You can also simply open [this list](https://api.aimlapi.com/models) in any web browser.
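If you prefer to fetch the list programmatically, here is a minimal Python sketch; the field names follow the schema below, which describes the plain JSON array this endpoint returns:

{% code overflow="wrap" %}
```python
import requests

# No API key is required for this endpoint
response = requests.get("https://api.aimlapi.com/models")
response.raise_for_status()

models = response.json()
print(f"Total models: {len(models)}")

# Print the ID and interaction type of each model
for model in models:
    print(model["id"], "-", model["type"])
```
{% endcode %}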
## GET /models > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/models":{"get":{"operationId":"ModelsController_getModels_v1","responses":{"200":{"description":"A list of available models.","content":{"application/json":{"schema":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"Unique identifier of the model."},"type":{"type":"string","description":"Model interaction type."},"info":{"type":"object","description":"Metadata describing the model.","properties":{"name":{"type":"string","description":"Human-readable model name."},"developer":{"type":"string","description":"Organization or company that developed the model."},"description":{"type":"string","description":"Short description of the model and its primary capabilities."},"contextLength":{"type":"integer","description":"Maximum supported context window size in tokens."},"maxTokens":{"type":"integer","description":"Maximum number of tokens that can be generated in a single response."},"url":{"type":"string","format":"uri","description":"Public model landing page on AIML API website."},"docs_url":{"type":"string","format":"uri","description":"Link to the official API documentation for this model."}},"required":["name","developer","description","url","docs_url"]},"features":{"type":"array","description":"List of supported features and API capabilities for the model.","items":{"type":"string"}},"endpoints":{"type":"array","description":"API endpoints through which this model can be accessed.","items":{"type":"string"}}},"required":["id","type","info","features","endpoints"]}}}}}}}}}} ``` ## Output Examples by Model Type As of early 2026, this endpoint returns a list of more than 400 models. Each item represents a single model identified by a unique ID. Depending on the model category (chat, video, etc.), the set of fields in each item may vary slightly, so below we provide representative examples from the main model categories. #### Example output item for a chat model Unlike other types of models, every chat model includes a non-empty `features` list that clearly shows what the model can do: support for streaming, instructions for SYSTEM or DEVELOPER roles besides the regular prompt, whether the model is described by the developer as “thinking”, etc. For more details on many of these, see the [CAPABILITIES](https://docs.aimlapi.com/capabilities/completion-or-chat-models) section of this documentation portal. 
{% code overflow="wrap" %} ```json { "id": "o3-mini", "type": "chat-completion", "info": { "name": "o3 mini", "developer": "Open AI", "description": "OpenAI o3-mini excels in reasoning tasks with advanced features like deliberative alignment and extensive context support.", "contextLength": 200000, "maxTokens": 100000, "url": "https://aimlapi.com/models/openai-o3-mini-api", "docs_url": "https://docs.aimlapi.com/api-references/text-models-llm/openai/o3-mini" }, "features": [ "openai/chat-completion", "openai/response-api", "openai/chat-assistant", "openai/chat-completion.function", "openai/chat-completion.message.refusal", "openai/chat-completion.message.system", "openai/chat-completion.message.developer", "openai/chat-completion.message.assistant", "openai/chat-completion.stream", "openai/chat-completion.max-completion-tokens", "openai/chat-completion.number-of-messages", "openai/chat-completion.stop", "openai/chat-completion.seed", "openai/chat-completion.reasoning", "openai/chat-completion.response-format" ], "endpoints": [ "/v1/chat/completions", "/v1/responses" ] } ``` {% endcode %} #### Example output item for an image model {% code overflow="wrap" %} ```json { "id": "flux/kontext-max/text-to-image", "type": "image", "info": { "name": "Flux Kontext Max", "developer": "Flux", "description": "A new Flux model optimized for maximum image quality.", "url": "https://aimlapi.com/models/flux-1-kontext-max", "docs_url": "https://docs.aimlapi.com/api-references/image-models/flux/flux-kontext-max-text-to-image" }, "features": [], "endpoints": [ "/v1/images/generations" ] } ``` {% endcode %} #### Example output item for a video model {% code overflow="wrap" %} ```json { "id": "veo2/image-to-video", "type": "video", "info": { "name": "Veo2 Image-to-Video", "description": "Veo2 Image-to-Video: Google's AI transforming still images into dynamic videos", "developer": "Google", "url": "https://aimlapi.com/models/veo-2-image-to-video-api", "docs_url": "https://docs.aimlapi.com/api-references/video-models/google/veo2-image-to-video" }, "features": [], "endpoints": [ "/v2/generate/video/google/generation", "/v2/video/generations" ] } ``` {% endcode %} --- # Source: https://docs.aimlapi.com/capabilities/completion-or-chat-models.md # Completion and Chat Completion This article describes two related capabilities of text models: **completion** and **chat completion**. The former, in its pure form, is now mostly relevant for research purposes and is not supported by our models. A list of models that support chat completion is provided at the end of this page. ## What is a Completion At a bare minimum, a text model is a large mathematical model trained to fulfill a single task: predicting the next token or character. This process is called **completion** and you will often encounter this term throughout your journey. For example, when using the completion text model `gpt-3.5-turbo-instruct`, you can provide an initial prompt to the model: ``` A long time ago, there were three princesses in a distant kingdom: ``` Running the model might yield the following output: {% code overflow="wrap" %} ``` A long time ago, there were three princesses in a distant kingdom: Princess Narcissa, who was beautiful but vain, Princess Rosa, who was kind and gentle, and Princess Aurora, who was strong and brave. The three sisters lived in a beautiful palace with their parents, the king and queen. ``` {% endcode %} This is a simple text completion. 
However, when training datasets become larger and are refined by human alignments, we can achieve truly AI-like results that even researchers did not initially anticipate. ## What is a Chat Completion To make text models useful in code and applications beyond generating arbitrary creative information, the model needs to be pretrained to return data in a specific format. Usually, using a text model feels like a chatting experience: you ask something in a certain role, and you get your answer as if it's from someone in another role. With this in mind, model providers train their models and feed their initial training data with some metadata, such as roles. This allows the model to respond in a certain format and be used in many complex applications. For example, the model training data might look like the following: {% code overflow="wrap" %} ```json5 USER: What's the color of the sky? ASSISTANT: The color of the sky can vary depending on several factors, but it is most commonly perceived as blue during the daytime. USER: What was the theme we discussed in the previous sentence? ASSISTANT: The theme of the previous sentence centered around the color of the sky. ``` {% endcode %} The above data is written in a chat-like conversation format. The training dataset contains a huge amount of these conversations, and during the training process, the model learns the relationships between words and characters, enabling it to return them in the same predictable format. After generating data, a subsystem parses this information and returns it in a format that can easily be handled by your code, such as the following JSON: ```json [ { "message": "Hi!", "role": "user" }, { "message": "Hi, how can I help you?", "role": "assistant" } ] ``` ### What roles exist There are several roles frequently used in chat models. The system role usually appears once, while other roles can appear multiple times: * **System**: The main instruction about formatting, rules, and acting. * **Assistant**: The model's response. * **User**: The user's content. * **Tool**: Response for external tools that can be used by the model. Using these roles, you can create complex behaviors and protect your AI from misleading use by user content. ## Models That Support Chat Completion Any [text chat model](https://docs.aimlapi.com/api-references/model-database#text-models-llm) supports this capability. *** --- # Source: https://docs.aimlapi.com/glossary/concepts.md # Concepts ## API API stands for *Application Programming Interface*. In the context of AI/ML, an API serves as a "handle" that enables you to integrate and utilize any Machine Learning model within your application. Our API supports communication via HTTP requests and is fully backward-compatible with OpenAI’s API. This means you can refer to OpenAI’s documentation for making calls to our API. However, be sure to change the base URL to direct your requests to our servers and select the desired model from our offerings. ## API Key An *API Key* is a credential that grants you access to our API from within your code. It is a sensitive string of characters that should be kept confidential. Do not share your API key with anyone else, as it could be misused without your knowledge. You can find your API key on the [account page](https://aimlapi.com/app/keys). ## Base URL The Base URL is the first part of the URL (including the protocol, domain, and pathname) that determines the server responsible for handling your request. 
It’s crucial to configure the correct Base URL in your application, especially if you are using SDKs from OpenAI, Azure, or other providers. By default, these SDKs are set to point to their servers, which are not compatible with our API keys and do not support many of the models we offer. Our base URL also supports versioning, so you can use the following as well: * `https://api.aimlapi.com` * `https://api.aimlapi.com/v1` Usually, you pass the base URL as the same field inside the SDK constructor. In some cases, you can set the environment variable `BASE_URL`, and it will work. If you want to use the OpenAI SDK, then follow the [setting up article](https://docs.aimlapi.com/quickstart/setting-up) and take a closer look at how to use it with the AI/ML API. ## Base64 Base64 is a way to encode binary data, such as files or images, into text format, making it safe to include in places like URLs or JSON requests. In the context of working with AI models, this means that if a model expects a parameter like `file_data` or `image_url`, you can encode your local file or image as a Base64 string, pass it as the value for that parameter, and in most cases, the model will successfully receive and process your file. You’ll need to import the `base64` library to handle file encoding. Below is a code example showing a real model call.
Code Example (Python): Providing an Image as a Base64 String We'll send an image file from the local disk to the chat model by passing it through the `image_url` parameter as a Base64-encoded string. Our prompt will ask [**gpt-4o**](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) chat model to describe the contents of the image with the question: `"What's in this image?"`
{% code overflow="wrap" %}
```python
from openai import OpenAI
from pathlib import Path
import base64

# Path to the local picture we want the model to describe
file_path = Path("C:/Users/user/Documents/example/images/racoons_0.png")

# Read and encode the image in base64
with open(file_path, "rb") as image_file:
    base64_image = base64.b64encode(image_file.read()).decode("utf-8")

# Create a data URL for the base64 image
image_data_url = f"data:image/png;base64,{base64_image}"

# Define an OpenAI client to call the model via the OpenAI SDK
base_url = "https://api.aimlapi.com/"
api_key = "<YOUR_AIMLAPI_KEY>"

client = OpenAI(api_key=api_key, base_url=base_url)

# Send the image as Base64 to the GPT-4o chat model
completion = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "What’s in this image?"},
        {
            "role": "user",
            "content": [
                {
                    "type": "image_url",
                    "image_url": {
                        "url": image_data_url
                    }
                }
            ]
        }
    ],
)

response = completion.choices[0].message.content
print(response)
```
{% endcode %}
**Response**: {% code overflow="wrap" %} ``` The image depicts an illustrated raccoon by a stream, reaching into the water with its paw. The setting is natural, with rocks and greenery surrounding the stream. ``` {% endcode %}
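The data URL above hard-codes the `image/png` MIME type. If your local images may come in other formats (JPEG, WebP, etc.), a small helper such as the hypothetical `to_data_url` below can pick the MIME type from the file extension. This is a sketch, not part of the original example:

{% code overflow="wrap" %}
```python
import base64
import mimetypes
from pathlib import Path

def to_data_url(path: str) -> str:
    # Guess the MIME type from the file extension; fall back to PNG
    mime, _ = mimetypes.guess_type(path)
    mime = mime or "image/png"
    encoded = base64.b64encode(Path(path).read_bytes()).decode("utf-8")
    return f"data:{mime};base64,{encoded}"

image_data_url = to_data_url("C:/Users/user/Documents/example/images/racoons_0.png")
```
{% endcode %}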
Code Example (Python): Providing a PDF file as a Base64 String We'll pass a local [PDF file](https://drive.google.com/file/d/1Lktn3GHw9zyfY7vhZqzQRa6kYCpgViI3/view?usp=sharing) to the chat model via the `file_data` parameter, encoding it as a Base64 string. The prompt will ask [**gpt-4o**](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) chat model to extract and list all headers, one per line. {% code overflow="wrap" %} ```python import base64 from openai import OpenAI aimlapi_key = "" client = OpenAI( base_url = "https://api.aimlapi.com", api_key = aimlapi_key, ) def main(): # Put your filename here. The file must be in the same folder as your Python script. your_file_name = "headers-example.pdf" with open(your_file_name, "rb") as f: data = f.read() # We encode the entire file into a single string to send it to the model base64_string = base64.b64encode(data).decode("utf-8") response = client.chat.completions.create( model="gpt-4o", messages=[ { "role": "user", "content": [ { # Sending our file to the model "type": "file", "file": { "filename": your_file_name, "file_data": f"data:application/pdf;base64,{base64_string}", } }, { # Providing the model with instructions on how to process the uploaded file "type": "text", "text": "Extract all the headers from this file, placing each on a new line", }, ], }, ] ) print(response.choices[0].message.content) if __name__ == "__main__": main() ``` {% endcode %} **Response**: {% code overflow="wrap" %} ``` The Renaissance Era A New Dawn of Thought The Masters of Art Scientific Breakthroughs Legacy and Influence ``` {% endcode %}
## Deprecation Deprecation is the process where a provider marks a model, parameter, or feature as outdated and no longer recommended for use. Deprecated items may remain available for some time but are likely to be removed or unsupported in the future. Deprecation can apply to an entire model (see [our list of deprecated/no longer supported models](https://docs.aimlapi.com/api-references/model-database#deprecated-no-longer-supported-models)) or to individual parameters. For example, in a recent update to the video model [**v1.6-pro/image-to-video**](https://docs.aimlapi.com/api-references/video-models/kling-ai/v1.6-pro-image-to-video) by Kling AI, the `aspect_ratio` parameter was deprecated: the model now automatically determines the aspect ratio based on the properties of the provided reference image, and explicit `aspect_ratio` input is no longer required. Users are encouraged to monitor deprecation notices carefully and update their integrations accordingly. We notify our users about such changes in our email newsletters. ## Endpoint A specific URL where an API can be accessed to perform an operation (e.g., generate a response, upload a file). ## **Fine-tuned model** A fine-tuned model is a base AI model that has been further trained on additional, specific data to specialize it for certain tasks or behaviors. For example, an "[*11B Llama 3.2*](https://docs.aimlapi.com/api-references/moderation-safety-models/meta/llama-guard-3-11b-vision-turbo) *model fine-tuned for content safety*" means that the original Llama 3.2 model (with 11 billion parameters) has received extra training using datasets focused on safe and appropriate content generation. ## Multimodal Model A model that can process and generate different types of data (text, images, audio) in a single interaction. ## Prompt The input given to a model to generate a response. The parameter used to pass a prompt is most often called simply `prompt`:
Some Python code

{% code overflow="wrap" %}
```python
json={
    "prompt": "slightly dim banner with abstract lines, base colors are coral, yellow and magenta",  # a prompt used for image generation
    "model": "flux/schnell",
    "image_size": {
        "width": 1536,
        "height": 640
    }
}
```
{% endcode %}
But there can be other variations. For example, the **messages** structure used in chat models passes the prompt within the **content** subfield. Depending on the `role` parameter value, this prompt will be interpreted either as a user message (**role: user**) or as a model instruction (**role: system** or **role: assistant**).
Some Python code

{% code overflow="wrap" %}
```python
"messages": [
    {
        "role": "system",
        "content": "you are a helpful assistant",  # this prompt is an instruction
        "name": "text"
    },
    {
        "role": "user",
        "content": "Why is the ocean salty?"  # this prompt is a user question
    }
],
```
{% endcode %}
There are also special parameters that allow you to refine prompts, control how strongly the model should follow them, or adjust the strictness of their interpretation. * `prompt_optimizer` or `enhance_prompt`: The model will automatically optimize the incoming prompt to improve the video generation quality if necessary. For more precise control, this parameter can be set to `False`, and the model will follow the instructions more strictly. * `negative_prompt`: The description of elements to avoid in the generated video/image/etc. * `cfg_scale` or `guidance_scale`: The Classifier Free Guidance (CFG) scale is a measure of how close you want the model to stick to your prompt. * `strength`: Determines how much the prompt influences the generated image. Which of these parameters are supported by a specific model can be found in the API Schema section on that model's page. ## Terminal If you are not a developer or are using modern systems, you might be familiar with it only as a "black window for hackers." However, the terminal is a very old and useful way to communicate with a computer. The terminal is an app inside your operating system that allows you to run commands by typing strings associated with some program. Depending on the operating system, you can run the terminal in many ways. Here are basic ways that usually work: * **On Windows:** Press the combination `Win + R` and type `cmd`. * **On Mac:** Press `Command + Space`, search for *Terminal*, then hit `Enter`. * **On Linux:** You are probably already familiar with it. On Ubuntu with GUI, for example, you can type `Ctrl + F`, search for *Terminal*, then hit `Enter`. ## Token A chunk of text (word, part of a word, or symbol) that text models use for processing inputs and outputs. The cost of using a text model is calculated based on the number of tokens processed. Both the text documents you send and the conversation history (in the case of interacting with an [Assistant](https://docs.aimlapi.com/solutions/openai/assistants)) are tokenized (split into tokens) and included in the cost calculation. You can limit the model’s output using the `max_completion_tokens` parameter (the fully equivalent deprecated `max_tokens` parameter is still supported for now). --- # Source: https://docs.aimlapi.com/integrations/continue.dev.md # continue.dev ## About continue.dev is an open-source AI coding assistant that runs directly in your IDE (VS Code, JetBrains, etc). You can use AI/ML API models with Continue via the built-in OpenAI-compatible provider — no plugins required. ## Configuration You can configure Continue by editing either `~/.continue/config.json` or `~/.continue/config.yaml`. #### Option 1: `config.json` ```json { "models": [ { "name": "AI/ML API", "provider": "openai", "model": "gpt-3.5-turbo", "apiBase": "", "apiKey": "" } ] } ``` #### Option 2: `config.yaml` {% code overflow="wrap" %} ```yaml models: - name: AI/ML API provider: openai model: gpt-3.5-turbo apiBase: apiKey: ``` {% endcode %} ✅ `provider: openai` — uses Continue’s native OpenAI-compatible interface\ ✅ `apiBase` — must be ` for AI/ML API\ ✅ `model` — any valid model from our [text model list](https://docs.aimlapi.com/api-references/text-models-llm#complete-text-model-list). *** ### Advanced Options (Optional) To disable compression or configure extra body parameters, use requestOptions.extraBodyProperties. 
#### JSON Example ```json { "models": [ { "name": "AI/ML API", "provider": "openai", "model": "gpt-3.5-turbo", "apiBase": "", "apiKey": "", "requestOptions": { "extraBodyProperties": { "transforms": [] } } } ] } ``` #### YAML Example ```yaml models: - name: AI/ML API provider: openai model: gpt-3.5-turbo apiBase: apiKey: requestOptions: extraBodyProperties: transforms: [] ``` ### GUI continue.dev provides an intuitive graphical interface for interacting with a chat model.
### Supported Models You can use any of [our text models](https://docs.aimlapi.com/api-references/text-models-llm#complete-text-model-list), including: * [gpt-3.5-turbo](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-3.5-turbo) * [gpt-4-turbo](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4-turbo) * [claude-3-5-sonnet-20240521](https://github.com/aimlapi/api-docs/blob/main/docs/integrations/broken-reference/README.md) * [google/gemini-2.0-flash](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.0-flash) * [and many others](https://docs.aimlapi.com/api-references/model-database). --- # Source: https://docs.aimlapi.com/use-cases/create-a-3d-model-from-an-image.md # Create a 3D Model from an Image ## Idea and Step-by-Step Plan Transforming a 2D image into a 3D model is a powerful way to bring static visuals to life. Whether you're working on a game, a product prototype, or just exploring creative tools, this process helps bridge the gap between visual concepts and spatial design. In this tutorial, you'll learn how to go from a single image to a usable 3D model using readily available tools. No deep 3D modeling experience required — just a bit of patience and curiosity. 1. Prepare Your Image. Choose a clear image of the object you want to convert. Best results come from front-facing images with neutral backgrounds and good lighting. 2. Upload the image to Triposr model and wait for the AI to process it — this usually takes under a minute. 3. Download the 3D Model. You can now use the model in your web app, AR/VR project, or 3D viewer. ## Implementation Let's call the Triposr model and pass it a reference image in Python code. {% code overflow="wrap" %} ```python import requests def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "triposr", "image_url": "https://upload.wikimedia.org/wikipedia/commons/thumb/2/22/Fly_Agaric_mushroom_05.jpg/576px-Fly_Agaric_mushroom_05.jpg", }, ) response.raise_for_status() data = response.json() url = data["model_mesh"]["url"] file_name = data["model_mesh"]["file_name"] mesh_response = requests.get(url, stream=True) with open(file_name, "wb") as file: for chunk in mesh_response.iter_content(chunk_size=8192): file.write(chunk) if __name__ == "__main__": main() ``` {% endcode %} **Response**: For clarity, we took several screenshots of our mushroom from different angles in an online GLB viewer. As you can see, the model understands the shape, but preserving the pattern on the back side (which was not visible on the reference image) could be improved:
Compare them with the [reference image](https://upload.wikimedia.org/wikipedia/commons/thumb/2/22/Fly_Agaric_mushroom_05.jpg/576px-Fly_Agaric_mushroom_05.jpg):
{% hint style="info" %} Try to choose reference images where the target object is not obstructed by other objects and does not blend into the background. Depending on the complexity of the object, you may need to experiment with the resolution of the reference image to achieve a satisfactory mesh. {% endhint %} --- # Source: https://docs.aimlapi.com/use-cases/create-a-looped-gif-for-a-web-banner.md # Create a Looped GIF for a Web Banner ## Idea and Step-by-Step Plan In this use case, we create an animated banner by combining image generation, video animation, and basic editing. Here’s the plan: 1. **Generating a Reference Image**\ We are going to use one of our image models to create a picture based on a prompt (which is our business idea). 2. **Animating the Image**\ We will pass the generated image to a video model that creates a smooth, perfectly looped animation. 3. **Adjusting the Video Size**\ We will use a free online service to crop the video to the desired banner dimensions. 4. **Convert to GIF**\ We will transform the final video into a GIF format for easy integration into websites. ## Full Walkthrough 1. **Generating a Reference Image** We chose a very fast image model [**flux/schnell**](https://docs.aimlapi.com/api-references/image-models/flux/flux-schnell), provided a prompt for an abstract image ("*slightly dim banner with abstract lines, base colors are coral, yellow, and magenta*"), and specified dimensions as close as possible to the sizes we needed for the website. {% hint style="warning" %} Unfortunately, it's not always possible to simply set the exact dimensions we need due to model limitations. For example, most image models require that both height and width values be multiples of 32.\ Video models may have minimum and maximum input size restrictions, and sometimes specific requirements for the aspect ratio as well. {% endhint %}
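If your target banner size is not already a multiple of 32, a small helper (shown here purely as an illustration of that constraint, not part of the original walkthrough) can round the requested dimensions up before calling the image model:

{% code overflow="wrap" %}
```python
def round_up(value: int, multiple: int = 32) -> int:
    # Round a requested dimension up to the nearest allowed multiple
    return ((value + multiple - 1) // multiple) * multiple

# Example: a 1500x630 banner request becomes 1504x640
width, height = round_up(1500), round_up(630)
print(width, height)  # 1504 640
```
{% endcode %}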
Code (Python) {% code overflow="wrap" %} ```python import requests def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "prompt": "slightly dim banner with abstract lines, base colors are coral, yellow and magenta", "model": "flux/schnell", "image_size": { "width": 1536, "height": 640 } } ) data = response.json() print("Generation:", data) if __name__ == "__main__": main() ``` {% endcode %}
Model response & Generated image preview {% code overflow="wrap" %} ```json5 Generation: {'images': [{'url': 'https://cdn.aimlapi.com/eagle/files/kangaroo/k6KvRjgHGF98TanFAf89x.png', 'width': 1536, 'height': 640, 'content_type': 'image/jpeg'}], 'timings': {'inference': 0.4465899569913745}, 'seed': 2166405766, 'has_nsfw_concepts': [False], 'prompt': 'slightly dim banner with abstract lines, base colors are coral, yellow and magenta'} ``` {% endcode %} Image preview:
2. **Animating the Image**

Not all video models are capable of creating looped videos (where the last frame matches the first one). We chose the model [kling-video/v1.6/pro/image-to-video](https://docs.aimlapi.com/api-references/video-models/kling-ai/v1.6-pro-image-to-video). It accepts the first and last frames separately via the `image_url` and `tail_image_url` parameters. For the video generation prompt, we used *"slow fluid-like motion of patterns of the image."*\
Feel free to experiment with effects, as long as they don't break the loop!

{% hint style="warning" %}
Don't worry: the code below is long only because it automatically polls the server for the finished video every 10 seconds, so you don't have to do it manually. Enter your [AIMLAPI key](https://aimlapi.com/app/keys) in the `api_key` variable near the top of the script; all the necessary parameters are passed in the first function, `generate_video()`.
{% endhint %}
Code (Python)

{% code overflow="wrap" %}
```python
import time

import requests

base_url = "https://api.aimlapi.com"
api_key = ""

url = f"{base_url}/v2/generate/video/kling/generation"


# Creating and sending a video generation task to the server
def generate_video():
    # Here's our image url
    input_url = "https://cdn.aimlapi.com/eagle/files/kangaroo/k6KvRjgHGF98TanFAf89x.png"

    payload = {
        "model": "kling-video/v1.6/pro/image-to-video",
        "image_url": input_url,       # it will be the 1st video frame
        "tail_image_url": input_url,  # it will be the last video frame
        "duration": 5,                # Length of the generated video in seconds
        "prompt": "slow fluid-like motion of patterns of the image"
    }
    headers = {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}

    response = requests.post(url, json=payload, headers=headers)

    if response.status_code >= 400:
        print(f"Error: {response.status_code} - {response.text}")
    else:
        response_data = response.json()
        print(response_data)
        return response_data


# Requesting the result of the task from the server using the generation_id
def get_video(gen_id):
    url = f"{base_url}/v2/generate/video/kling/generation"
    params = {
        "generation_id": gen_id,
    }
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json"
    }

    response = requests.get(url, params=params, headers=headers)
    # print("Generation:", response.json())
    return response.json()


def main():
    # Generate video
    gen_response = generate_video()
    if not gen_response:
        return None
    gen_id = gen_response.get("id")
    print("Gen_ID: ", gen_id)

    # Try to retrieve the video from the server every 10 sec
    if gen_id:
        start_time = time.time()

        timeout = 600
        while time.time() - start_time < timeout:
            response_data = get_video(gen_id)

            if response_data is None:
                print("Error: No response from API")
                break

            status = response_data.get("status")
            print("Status:", status)

            if status == "waiting" or status == "active" or status == "queued" or status == "generating":
                print("Still waiting... Checking again in 10 seconds.")
                time.sleep(10)
            else:
                print("Processing complete:\n", response_data)
                return response_data

        print("Timeout reached. Stopping.")
        return None


if __name__ == "__main__":
    main()
```
{% endcode %}
Model response & Generated video preview {% code overflow="wrap" %} ```json5 {'id': '36f1c11f-e0ab-4048-9b2d-60e413ebb64c:kling-video/v1.6/pro/image-to-video', 'status': 'queued'} Gen_ID: 36f1c11f-e0ab-4048-9b2d-60e413ebb64c:kling-video/v1.6/pro/image-to-video Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': '36f1c11f-e0ab-4048-9b2d-60e413ebb64c:kling-video/v1.6/pro/image-to-video', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/eagle/files/zebra/pmpSUqe0n-1Z1ysnob6vF_output.mp4', 'content_type': 'video/mp4', 'file_name': 'output.mp4', 'file_size': 3643207}} ``` {% endcode %} Very small video preview:
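Before moving on, you may want a local copy of the result. Here is a small optional sketch (not part of the original walkthrough) that saves the MP4 using the `url` and `file_name` fields from the final response shown above:

{% code overflow="wrap" %}
```python
import requests

def save_video(response_data):
    """Save the finished MP4 locally using the fields from the final response."""
    video = response_data["video"]  # e.g. {'url': ..., 'file_name': 'output.mp4', ...}
    with requests.get(video["url"], stream=True) as r:
        r.raise_for_status()
        with open(video["file_name"], "wb") as f:
            for chunk in r.iter_content(chunk_size=8192):
                f.write(chunk)
    print("Saved:", video["file_name"])
```
{% endcode %}

You could call `save_video()` with the dictionary returned by `main()` from the polling script above.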
3. **Adjusting the Video Size** For video cropping, we used the [free web service](https://ezgif.com/crop-video).
Settings Select the **Crop video** tab and enter the URL of your video.
Using the preset aspect ratios or manual settings, adjust the area of your video that you want to turn into a GIF banner. Then click **Set**, and after that — **Crop video**.
After a few seconds of processing, a window with the cropped video fragment will appear below. Right-click on it and select **Save**.
4. **Convert to GIF** For this step, we used another [free web service](https://www.freeconvert.com/convert/video-to-gif).
Settings Click **Choose Files** and upload your cropped video. After that, the output GIF settings will become available. Set your desired width.
Scroll down in the settings to find the GIF compression parameter. We set it to minimum for better image quality, but for larger videos, feel free to experiment with different values.\ Then click **Convert to GIF**.
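If you prefer to handle steps 3 and 4 locally instead of using web services, a rough alternative is to shell out to ffmpeg from Python. This sketch assumes ffmpeg is installed and available on your PATH; the crop rectangle values are placeholders you would adjust to your own banner dimensions:

{% code overflow="wrap" %}
```python
import subprocess

INPUT = "output.mp4"      # the video downloaded from the API
CROPPED = "cropped.mp4"
GIF = "banner.gif"

# Crop to width:height at offset x:y (placeholder values; adjust to your banner)
subprocess.run(
    ["ffmpeg", "-y", "-i", INPUT, "-vf", "crop=1200:300:150:170", "-an", CROPPED],
    check=True,
)

# Convert the cropped clip to an infinitely looping GIF (15 fps, 800 px wide)
subprocess.run(
    ["ffmpeg", "-y", "-i", CROPPED, "-vf", "fps=15,scale=800:-1", "-loop", "0", GIF],
    check=True,
)
```
{% endcode %}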
All that's left is to upload the finished GIF file. In the next section, you can see it in action.
## Results
Animated Looped Web Banner
***
You can use such banners in the website header or overlay your promotional text on a transparent background in a website builder to make it look like a single element.\ Best of luck with your implementation! --- # Source: https://docs.aimlapi.com/use-cases/create-an-assistant-to-discuss-a-specific-document.md # Create an Assistant to Discuss a Specific Document ## Idea and Step-by-Step Plan Today, we’re going to create an AI [Assistant](https://docs.aimlapi.com/solutions/openai/assistants) that helps users engage with the content of a particular document. This Assistant can answer questions about the text, explain specific sections, find relevant parts, and even participate in discussions — for example, by offering arguments, clarifying ambiguous points, or helping formulate conclusions. It's especially useful when working with technical documentation, legal texts, research papers, or project documents. The following features need to be implemented: * Core Assistant functionality (ability to communicate with the user and respond accurately to questions using [Chat Completion](https://docs.aimlapi.com/capabilities/completion-or-chat-models) capability). * Document upload (TXT). * Streaming mode. {% hint style="success" %} You can read the step-by-step explanation below or jump straight to the [ready-to-use Python code](#full-code-example) at the bottom of this page.\ Make sure you have [your AIMLAPI key](https://aimlapi.com/app/keys)! {% endhint %} ## Step-by-Step Explanation ### 1. Preparing Input File As input, we used a `.txt` file with the following content and placed it in the same directory as our Python script. For testing, we created a simple file with recipes for three different dishes.
Input Text File (recipes.txt) {% code overflow="wrap" %} ``` 1. Sun-Dried Tomato & Garlic Pasta Prep Time: 25 minutes Servings: 2 Ingredients: • 200g spaghetti • 6–8 sun-dried tomatoes in oil • 2 garlic cloves • Olive oil — 2 tbsp • Salt — to taste • Black pepper — to taste • Fresh basil (optional) Required Kitchen Tools: • Large pot • Frying pan • Strainer • Cutting board & knife • Wooden spoon Instructions: 1. Boil a large pot of salted water and cook the spaghetti according to package instructions. 2. While the pasta cooks, finely chop the garlic and sun-dried tomatoes. 3. In a frying pan, heat olive oil over medium heat. Add garlic and cook for 30 seconds until fragrant. 4. Add sun-dried tomatoes and stir for 2–3 minutes. 5. Drain the pasta and toss it into the pan with the tomato-garlic mixture. 6. Mix well, season with salt and pepper, and garnish with fresh basil if desired. 7. Serve hot. 2. Chickpea & Avocado Salad Prep Time: 15 minutes Servings: 2 Ingredients: • 1 can of chickpeas (400g), drained and rinsed • 1 ripe avocado, diced • 1 small red onion, finely chopped • Juice of 1 lemon • Olive oil — 1 tbsp • Salt & pepper — to taste • Fresh parsley (optional) Required Kitchen Tools: • Mixing bowl • Cutting board & knife • Fork or spoon • Citrus squeezer (optional) Instructions: 1. In a bowl, combine chickpeas, diced avocado, and chopped red onion. 2. Squeeze in lemon juice and drizzle with olive oil. 3. Season with salt and pepper. 4. Toss everything gently to mix, trying not to mash the avocado. 5. Top with chopped parsley if desired. 6. Serve immediately or chill for 10 minutes. 3. Quick Oatmeal Banana Cookies Prep Time: 10 minutes Bake Time: 15 minutes Servings: ~12 cookies Ingredients: • 2 ripe bananas • 1 cup rolled oats • 1/4 cup chocolate chips or chopped nuts (optional) • 1/2 tsp cinnamon (optional) Required Kitchen Tools: • Mixing bowl • Fork or potato masher • Baking tray • Parchment paper • Oven Instructions: 1. Preheat oven to 180°C (350°F). Line a baking tray with parchment paper. 2. In a bowl, mash the bananas until smooth. 3. Mix in oats and any add-ins like chocolate chips or cinnamon. 4. Scoop spoonfuls of the mixture onto the tray and flatten slightly. 5. Bake for 12–15 minutes until edges are golden. 6. Let cool for a few minutes before serving. ``` {% endcode %}
### 2. Core Assistant Functionality

Assistants are a more advanced way of working with chat models. If you have never worked with OpenAI Assistants before, we recommend reviewing the key concepts and structure of how Assistants operate in the [corresponding section](https://docs.aimlapi.com/solutions/openai/assistants#main-entities-in-assistants-workflow).

Below, in the expandable sections, you can see a basic but working example of an Assistant, and a little further down, an example of a console conversation with it. To exit the chat, type `exit` or `quit`.

Please note: this example does not use [streaming mode](https://docs.aimlapi.com/capabilities/streaming-mode), so the Assistant does not produce its answer word by word; instead, it forms the response completely, and the entire text appears in the console at once.
Simple Example with the Core Assistant Functionality ```python import openai from openai import OpenAI # Connect to OpenAI API client = OpenAI( api_key="", base_url="https://api.aimlapi.com/" ) # Create an assistant my_assistant = client.beta.assistants.create( instructions="You are a helpful assistant.", name="AI Assistant", model="gpt-4o", # Specify the model ) assistant_id = my_assistant.id # Store assistant ID thread = client.beta.threads.create() # Create a new thread thread_id = thread.id # Store the thread ID def initial_request(): client.beta.threads.messages.create( thread_id=thread.id, role="user", content="Hi! Let's chat!", ) def send_message(user_message): """Send a message to the assistant and receive a full response""" if not user_message.strip(): print("⚠️ Message cannot be empty!") return # Add the user's message to the thread client.beta.threads.messages.create( thread_id=thread_id, role="user", content=user_message ) # Start a new run and wait for completion run = client.beta.threads.runs.create_and_poll( thread_id=thread_id, assistant_id=assistant_id, instructions="Keep responses concise and clear." ) # Check if the run was successful if run.status == "completed": # Retrieve messages from the thread messages = client.beta.threads.messages.list(thread_id=thread_id) # Find the last assistant message for message in reversed(messages.data): if message.role == "assistant": print() # Add an empty line for spacing print(f"assistant > {message.content[0].text.value}") return print("⚠️ Error: Failed to get a response from the assistant.") # Main chat loop initial_request() print("🤖 AI Assistant is ready! Type 'exit' to quit.") while True: user_input = input("\nYou > ") if user_input.lower() in ["exit", "quit"]: print("👋 Chat session ended. See you next time!") break send_message(user_input) ```
Interaction Example {% code overflow="wrap" %} ``` 🤖 AI Assistant is ready! Type 'exit' to quit. You > Hi! What could we discuss today? assistant > Hi there! We could chat about a wide range of topics. Here are a few options: Current events or news updates. Technology advancements. Book or movie recommendations. Travel destinations. Hobbies or personal interests. Let me know what you’re interested in! You > Cool! Okay, maybe next time! Bye! assistant > Goodbye! If you have more questions in the future, feel free to ask. Have a great day! 😊 You > exit 👋 Chat session ended. See you next time! ``` {% endcode %}
### 3. Let's Add a File to Discuss!

Since we want to start discussing the file contents with the Assistant immediately upon launch, we need to pass the file to it in advance, directly in the code.\
First, we upload our .txt file through the Files API (opening it with Python's built-in `open()`), then attach the returned file ID to the first user message, which is also created directly in the code. The text of this initial message will be set as follows: "*Here's my .txt file — extract the text, read through it, and let me know when you're ready to start answering my questions about this document.*"
File uploading {% code overflow="wrap" %} ```python file = client.files.create( file=open("recipes.txt", "rb"), purpose='assistants' ) print(file) ``` {% endcode %}
Creating the first message in code with attaching the file {% code overflow="wrap" %} ```python # First message with file attachment client.beta.threads.messages.create( thread_id=thread.id, role="user", content="Here's my .txt file — extract the text, read through it, and let me know when you're ready to start answering my questions about this document.", attachments=[ { "file_id": txt_id, "tools": [{"type": "file_search"}] } ] ) ``` {% endcode %}
### 4. Add Streaming Mode For a more dynamic interaction, the established practice when communicating with online AI chats is now [streaming mode](https://docs.aimlapi.com/capabilities/streaming-mode), where the model's response appears on the user's screen word by word as it is being formed. Let's add this feature to our Assistant as well.
Explanation

**How to handle events**

To do this, we will define our own `EventHandler` class by subclassing `AssistantEventHandler`, which is provided by the `openai` library.

```python
from openai import AssistantEventHandler
```

Creating the handler:

{% code overflow="wrap" %}
```python
# Custom event handler to stream assistant responses
class EventHandler(AssistantEventHandler):
    def on_text_created(self, text):
        print("\nassistant >", end="", flush=True)

    def on_text_delta(self, delta, snapshot):
        print(delta.value, end="", flush=True)

    def on_tool_call_created(self, tool_call):
        print(f"\nassistant > {tool_call.type}\n", flush=True)

    def on_tool_call_delta(self, delta, snapshot):
        if delta.type == 'file_search':
            if delta.file_search.input:
                print(delta.file_search.input, end="", flush=True)
            if delta.file_search.outputs:
                print(f"\n\noutput >", flush=True)
                for output in delta.file_search.outputs:
                    if output.type == "logs":
                        print(f"\n{output.logs}", flush=True)
```
{% endcode %}

**What events are handled**

`on_text_created(self, text)`
Triggered when the Assistant creates a text response. The code simply prints `assistant >` to indicate the beginning of the output.

`on_text_delta(self, delta, snapshot)`
Triggered when new parts of the text (tokens) arrive. The code prints each new fragment to the console without a newline (`end=""`), creating the effect of the text appearing gradually.

`on_tool_call_created(self, tool_call)`
Triggered if the Assistant decides to use one of the tools (e.g., Code Interpreter or external APIs). The code simply prints the type of the invoked tool.

`on_tool_call_delta(self, delta, snapshot)`
Triggered when the Assistant sends data to a tool or receives a result from it.

***

**How it works**

When the Assistant starts forming a response, it triggers `on_text_created`. Then, as tokens are generated, `on_text_delta` is triggered, updating the text in real time. If a tool is used in the response, `on_tool_call_created` is triggered, followed by `on_tool_call_delta` to show the process of the tool handling the data.
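To put the handler to work, pass an instance of it to a streamed run; this is the same pattern used in the full code example in the next section:

{% code overflow="wrap" %}
```python
# Stream a run on an existing thread, printing tokens via our EventHandler
with client.beta.threads.runs.stream(
    thread_id=thread.id,
    assistant_id=assistant.id,
    event_handler=EventHandler()
) as stream:
    stream.until_done()  # block until the streamed run finishes
```
{% endcode %}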
In the next section, you will find the ready-made code for creating an Assistant, passing it an input file, and interacting with it in streaming mode. At the end of the page, you will also find the listing of our conversation with this Assistant. ## Full Code Example
Code {% code overflow="wrap" %} ```python import openai from openai import OpenAI from openai import AssistantEventHandler client = openai.OpenAI( base_url="https://api.aimlapi.com/", # Replace with your AIMLAPI key api_key="" ) # Custom event handler to stream assistant responses class EventHandler(AssistantEventHandler): def on_text_created(self, text): print("\nassistant >", end="", flush=True) def on_text_delta(self, delta, snapshot): print(delta.value, end="", flush=True) def on_tool_call_created(self, tool_call): print(f"\nassistant > {tool_call.type}\n", flush=True) def on_tool_call_delta(self, delta, snapshot): if delta.type == 'file_search': if delta.file_search.input: print(delta.file_search.input, end="", flush=True) if delta.file_search.outputs: print(f"\n\noutput >", flush=True) for output in delta.file_search.outputs: if output.type == "logs": print(f"\n{output.logs}", flush=True) # Upload the TXT file file = client.files.create( file=open("recipes.txt", "rb"), purpose="assistants" ) txt_id = file.id # Create the assistant with appropriate instructions assistant = client.beta.assistants.create( name="Recipe Assistant", instructions="You assist with questions based on the information from the file that was uploaded to you.", tools=[{"type": "file_search"}], model="gpt-4o" # Specify the model ) # Create a thread for the conversation thread = client.beta.threads.create() # First message with file attachment client.beta.threads.messages.create( thread_id=thread.id, role="user", content="Here's my .txt file — extract the text, read through it, and let me know when you're ready to start answering my questions about this document.", attachments=[ { "file_id": txt_id, "tools": [{"type": "file_search"}] } ] ) # Start the initial run and stream the response with client.beta.threads.runs.stream( thread_id=thread.id, assistant_id=assistant.id, event_handler=EventHandler() ) as stream: stream.until_done() # Start the interactive loop for further conversation while True: user_input = input("\nYou > ") if user_input.lower() in ["exit", "quit"]: break client.beta.threads.messages.create( thread_id=thread.id, role="user", content=user_input ) with client.beta.threads.runs.stream( thread_id=thread.id, assistant_id=assistant.id, event_handler=EventHandler() ) as stream: stream.until_done() ``` {% endcode %}
Conversation Example {% code overflow="wrap" %} ``` assistant > file_search assistant >I've accessed the file you uploaded and am ready to assist you with any questions you have about its content. Please feel free to ask! You > Glad to hear that! Could you briefly summarize what kind of information is contained in this document? assistant > file_search assistant >The document you uploaded contains recipes for three different dishes. Here's a brief summary of each: 1. **Sun-Dried Tomato & Garlic Pasta**: This recipe involves cooking spaghetti and then mixing it with a sautéed mixture of garlic and sun-dried tomatoes. It also includes optional garnishing with fresh basil. 2. **Chickpea & Avocado Salad**: This is a quick salad recipe that combines chickpeas, avocado, and red onion, dressed with lemon juice and olive oil. Optional parsley can be added for garnish. 3. **Quick Oatmeal Banana Cookies**: A simple cookie recipe that uses ingredients like ripe bananas and rolled oats, with optional additions like chocolate chips or nuts. The cookies are baked until golden brown【9:0†recipes.txt】. Feel free to ask any further questions you may have about this document! You > What's the fastest dish to make? assistant >The fastest dish to make from the document is the **Chickpea & Avocado Salad**. It requires no cooking and is primarily a matter of combining ingredients like chickpeas, avocado, red onion, lemon juice, and olive oil. The preparation time is minimal, especially if you have canned chickpeas which just need to be rinsed【9:0†recipes.txt】. You > Thanx! is there smth else vegan? assistant >Yes, aside from the Chickpea & Avocado Salad, the **Sun-Dried Tomato & Garlic Pasta** is also vegan. It consists of simple ingredients like spaghetti, garlic, and sun-dried tomatoes, with optional basil for garnishing【9:0†recipes.txt】. You > Awesome, I'm off to the store to get some garlic! By the way, what do your notes mean? like "【9:0†recipes.txt】" assistant >The notes like "【9:0†recipes.txt】" are citations that refer back to the specific part of the document where the information was found. The numbers indicate: - `9`: The index of the message or query where this information was accessed. - `0`: The specific result in the search query result list. - `recipes.txt`: The name of the document from which the information was extracted. These citations help keep track of sources for extracted information. You > Cool, you're honestly really easy to work with! Thanks and see you soon! assistant >Thank you for your kind words! I'm glad I could help. Don't hesitate to return if you have more questions in the future. Have a great time cooking and see you soon! You > exit ``` {% endcode %}
*** Copy the code, insert your AIMLAPI key, specify the path to your document in the code, and give it a try yourself! --- # Source: https://docs.aimlapi.com/use-cases/create-images-illustrate-an-article.md # Create Images: Illustrate an Article ## Idea and Step-by-Step Plan This workflow allows you to generate an illustration based on a piece of text. 1. **Provide the Text Snippet**\ Input a short descriptive or narrative passage. 2. **Choose a Chat Model and Generate a Prompt for an Image Model**\ Select a language model (e.g., [GPT-4](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4), [GPT-4o](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o)) that will process the text and suggest a visual interpretation. Ask the chat model to prepare a visual prompt. If needed, tweak the generated prompt before sending it to the image model. Then, pass that prompt to an image generation model (e.g., [DALL·E](https://docs.aimlapi.com/api-references/image-models/openai/dall-e-3)). 3. **Generate the Image**\ Use the selected image model to produce the final illustration and insert it into the text. ## Full Walkthrough 1. **Provide the Text Snippet**\ As a text example, we'll provide the following one:
Expand ***Futuristic Cities*** *Cities of the future promise to radically transform how people live, work, and move. Instead of sprawling layouts, we’ll see vertical structures that integrate residential, work, and public spaces into single, self-sustaining ecosystems. Architecture will adapt to climate conditions, and buildings will be energy-efficient—generating power through solar panels, wind turbines, and even foot traffic.* *Transportation will be fully autonomous and silent. Streets will be freed from traffic and pollution, with ground-level space given back to pedestrians and greenery. Drones, magnetic levitation pods, and underground tunnels will handle most transit. Artificial intelligence will manage traffic flow and energy distribution in real time, ensuring maximum efficiency and comfort.* *Digital technology will be woven into every part of urban life. Smart homes will adapt to residents’ habits, while city services will respond instantly to citizen needs. Virtual and augmented reality will blur the line between physical and digital spaces. These cities won’t just be places to live—they’ll be flexible, sustainable environments where technology truly serves people.*
2. **Choose a Chat Model and Generate a Prompt for an Image Model**\ We decided to use the GPT-4o chat model to generate the prompt. As input, we’ll provide it with a brief instruction: `"Read this article and generate a short prompt for illustration generation (no need to output the words like Prompt):"` along with our text snippet from the previous step. {% code overflow="wrap" %} ```python from openai import OpenAI def complete_chat(): # Insert your AIML API Key instead of : api_key = '' client = OpenAI( base_url='https://api.aimlapi.com', api_key=api_key, ) response = client.chat.completions.create( model="gpt-4o", messages=[ { "role": "user", "content": "Read this article and generate a short prompt for illustration generation (no need to output the words like Prompt): Futuristic Cities. Cities of the future promise to radically transform how people live, work, and move. Instead of sprawling layouts, we’ll see vertical structures that integrate residential, work, and public spaces into single, self-sustaining ecosystems. Architecture will adapt to climate conditions, and buildings will be energy-efficient—generating power through solar panels, wind turbines, and even foot traffic. Transportation will be fully autonomous and silent. Streets will be freed from traffic and pollution, with ground-level space given back to pedestrians and greenery. Drones, magnetic levitation pods, and underground tunnels will handle most transit. Artificial intelligence will manage traffic flow and energy distribution in real time, ensuring maximum efficiency and comfort. Digital technology will be woven into every part of urban life. Smart homes will adapt to residents’ habits, while city services will respond instantly to citizen needs. Virtual and augmented reality will blur the line between physical and digital spaces. These cities won’t just be places to live—they’ll be flexible, sustainable environments where technology truly serves people.", }, ], ) print(response.choices[0].message.content) if __name__ == "__main__": complete_chat() ``` {% endcode %}
Response {% code overflow="wrap" %} ``` A vibrant illustration of a futuristic cityscape featuring sleek vertical skyscrapers blending residential, work, and public spaces into cohesive ecosystems. Highlight eco-friendly architecture with integrated solar panels, wind turbines, and energy harvested from foot traffic. Show autonomous vehicles, including drones and magnetic levitation pods, gracefully gliding through the air and sleek underground tunnels, while lush greenery and pedestrian-friendly pathways replace conventional streets. Incorporate AI-managed digital interfaces in homes and public spaces, with augmented reality elements blurring physical and digital boundaries, creating a harmonious, tech-driven urban environment. ``` {% endcode %}
3. **Generate the Image** Using the supporting Text-to-Image [**flux-pro**](https://docs.aimlapi.com/api-references/image-models/flux/flux-pro) model from [Flux](https://docs.aimlapi.com/api-references/image-models/flux): {% code overflow="wrap" %} ```python import requests def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "prompt": """ A vibrant illustration of a futuristic cityscape featuring sleek vertical skyscrapers blending residential, work, and public spaces into cohesive ecosystems. Highlight eco-friendly architecture with integrated solar panels, wind turbines, and energy harvested from foot traffic. Show autonomous vehicles, including drones and magnetic levitation pods, gracefully gliding through the air and sleek underground tunnels, while lush greenery and pedestrian-friendly pathways replace conventional streets. Incorporate AI-managed digital interfaces in homes and public spaces, with augmented reality elements blurring physical and digital boundaries, creating a harmonious, tech-driven urban environment. """, "model": "flux-pro", 'image_size': { "width": 1024, "height": 320 } } ) response.raise_for_status() data = response.json() print("Generation:", data) if __name__ == "__main__": main() ``` {% endcode %}
Response & Generated Image {% code overflow="wrap" %} ```json5 Generation: {'images': [{'url': 'https://cdn.aimlapi.com/squirrel/files/rabbit/Ip_fxJ-7WScVVNKOrAt11_6a31476ee9e44e74a831dfcec6e0cab3.jpg', 'width': 1024, 'height': 320, 'content_type': 'image/jpeg'}], 'timings': {}, 'seed': 550911681, 'has_nsfw_concepts': [False], 'prompt': '\nA vibrant illustration of a futuristic cityscape featuring sleek vertical skyscrapers blending residential, work, and public spaces into cohesive ecosystems. Highlight eco-friendly architecture with integrated solar panels, wind turbines, and energy harvested from foot traffic. Show autonomous vehicles, including drones and magnetic levitation pods, gracefully gliding through the air and sleek underground tunnels, while lush greenery and pedestrian-friendly pathways replace conventional streets. Incorporate AI-managed digital interfaces in homes and public spaces, with augmented reality elements blurring physical and digital boundaries, creating a harmonious, tech-driven urban environment.\n'} ``` {% endcode %} Image (preview):
## Results Let's insert the generated illustration into the text and check it out!
Illustrated Text ***Futuristic Cities*** *Cities of the future promise to radically transform how people live, work, and move. Instead of sprawling layouts, we’ll see vertical structures that integrate residential, work, and public spaces into single, self-sustaining ecosystems. Architecture will adapt to climate conditions, and buildings will be energy-efficient—generating power through solar panels, wind turbines, and even foot traffic.*
*Transportation will be fully autonomous and silent. Streets will be freed from traffic and pollution, with ground-level space given back to pedestrians and greenery. Drones, magnetic levitation pods, and underground tunnels will handle most transit. Artificial intelligence will manage traffic flow and energy distribution in real time, ensuring maximum efficiency and comfort.* *Digital technology will be woven into every part of urban life. Smart homes will adapt to residents’ habits, while city services will respond instantly to citizen needs. Virtual and augmented reality will blur the line between physical and digital spaces. These cities won’t just be places to live—they’ll be flexible, sustainable environments where technology truly serves people.*
---

# Source: https://docs.aimlapi.com/integrations/cursor.md

# Cursor

{% hint style="warning" %}
Only versions 1.x are currently supported for integration.\
You can select one of them on [Cursor’s official download page](https://cursor.com/download).
{% endhint %}

## About

[Cursor](https://cursor.com/) is an advanced AI-powered IDE that combines intelligent code completion, inline explanations, and automatic code editing directly inside the editor.

This guide explains how to connect **AI/ML API** to **Cursor** using the **Azure OpenAI-compatible** flow.\
You’ll get a clean setup with **one endpoint** and support for **slashes in deployment names**.

## 🚀 Quick Setup
| Field | Value |
| --- | --- |
| Base URL | `https://api.aimlapi.com` |
| API Key | Your AI/ML API key (create at aimlapi.com/app/keys) |
| Deployment | `google/gemini-2.5-pro` (slashes allowed) |
| Alias (Model ID) | `gpt-4o` (bypasses the restriction and makes Cursor work with any model) |
{% hint style="warning" %} Do **not** add `/v2/azure` or `/openai` to the Base URL. {% endhint %} *** ## ✅ Prerequisites * AI/ML API key * Cursor IDE (latest) * Internet access to `api.aimlapi.com` *** ## Installation & Configuration ### 1) Configure Cursor (Azure path) Open **Cursor → Settings → Models → Azure** and fill in: **Base URL** ``` https://api.aimlapi.com ``` **Deployment Name** ``` google/gemini-2.5-pro ``` **API Key** Paste your AI/ML API key exactly (avoid spaces). Click **Verify** to confirm.
*** ### 2) Keep the model picker clean In Cursor’s **Chat model selector**, only enable: ``` gpt-4o ``` This alias (Model ID) will send traffic to your deployment (`google/gemini-2.5-pro`).
*** ### 3) How Cursor calls AI/ML API Example request generated by Cursor: {% code overflow="wrap" %} ```http POST https://api.aimlapi.com/openai/deployments/google/gemini-2.5-pro/chat/completions?api-version=2024-12-01-preview Api-Key: Content-Type: application/json { "messages": [ { "role": "system", "content": "You are a helpful coding assistant." }, { "role": "user", "content": "Write a Python function that reverses a string." } ] } ``` {% endcode %} Notes: * `Deployment Name` is inserted into `/deployments//...`. * `api-version` is handled by Cursor automatically. * Base URL stays **exactly** `https://api.aimlapi.com`.
*** ### 4) Optional smoke test {% code overflow="wrap" %} ```bash curl -sS -X POST \ "https://api.aimlapi.com/openai/deployments/google/gemini-2.5-pro/chat/completions?api-version=2024-12-01-preview" \ -H "Api-Key: YOUR_AIMLAPI_KEY" \ -H "Content-Type: application/json" \ -d '{ "messages": [ {"role":"system","content":"You are a helpful coding assistant."}, {"role":"user","content":"Give me a one-line Python function to merge two dicts."} ] }' ``` {% endcode %} You should receive a JSON response with `choices[0].message.content`. *** ### 5) Common pitfalls * **Deployment not found** → Check Base URL & Deployment Name. * **Invalid API key** → Re-copy the key, ensure it’s in the Azure section. * **Wrong model list** → Toggle Azure off/on, click Verify, restart Cursor. * **Slashes in names** → Allowed in Deployment, but keep alias short (e.g. `gpt-4o`). *** ### 6) Tips for teams * Standardize the **alias (Model ID)** (`gpt-4o`) so everyone sees the same thing in Cursor. * Document your **Base URL + Deployment** in team wiki to avoid drift. * You can swap deployments later without changing the alias in UI. *** ### ✅ Summary (copy/paste) * **Base URL:** `https://api.aimlapi.com` * **API Key:** your AI/ML API key * **Deployment:** `google/gemini-2.5-pro` *(slashes allowed)* * **Alias (Model ID):** `gpt-4o` With this setup, Cursor talks to **AI/ML API** using the **Azure flow**, while you keep the UI clean and consistent --- # Source: https://docs.aimlapi.com/api-references/image-models/openai/dall-e-2.md # DALL·E 2 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `dall-e-2` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview An advanced AI system designed to generate high-quality images and artwork from textual descriptions. It builds upon its predecessor, DALL·E 1, utilizing improved techniques to create images that are more realistic and contextually accurate. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). 
## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["dall-e-2"]},"prompt":{"type":"string","maxLength":1000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"n":{"type":"number","minimum":1,"maximum":10,"default":1,"description":"The number of images to generate."},"size":{"type":"string","enum":["1024x1024","512x512","256x256"],"default":"1024x1024","description":"The size of the generated image."},"response_format":{"type":"string","enum":["url","b64_json"],"default":"url","description":"The format in which the generated images are returned."}},"required":["model","prompt"],"title":"dall-e-2"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified size using a simple prompt. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.", "model": "dall-e-2" } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'dall-e-2', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.', quality: 'hd' }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { created: 1756972085, data: [ { url: 'https://oaidalleapiprodscus.blob.core.windows.net/private/org-5drZvxmo1TGoMx2jeKKGAGSh/user-eKr1xiaNRxSYqgKrXfgZzSAJ/img-lrG5yb73YupujdUiDfx1sUpo.png?st=2025-09-04T06%3A48%3A05Z&se=2025-09-04T08%3A48%3A05Z&sp=r&sv=2024-08-04&sr=b&rscd=inline&rsct=image/png&skoid=0e2a3d55-e963-40c9-9c89-2a1aa28cb3ac&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2025-09-04T03%3A29%3A29Z&ske=2025-09-05T03%3A29%3A29Z&sks=b&skv=2024-08-04&sig=5mTzRo50JWr%2BuoqSOAXW9WZ0%2Bak93/rMwp2sZo3sLYE%3D' } ] } ``` {% endcode %}
We obtained the following 1024x1024 image by running this code example:

"A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses."

--- # Source: https://docs.aimlapi.com/api-references/image-models/openai/dall-e-3.md # DALL·E 3 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `dall-e-3` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview This model represents a significant leap forward in AI-driven image creation, capable of generating images from text inputs. This model processes prompts with enhanced neural network architectures, resulting in images that are not only relevant but also rich in detail and diversity. DALL·E 3's deep learning techniques analyze and understand complex descriptions, allowing for the generation of visuals across a wide range of styles and subjects. You can also view [a detailed comparison of this model](https://aimlapi.com/comparisons/flux-1-vs-dall-e-3) on our main website. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["dall-e-3"]},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"n":{"type":"number","enum":[1],"default":1,"description":"The number of images to generate."},"quality":{"type":"string","enum":["standard","hd"],"default":"standard","description":"The quality of the image that will be generated."},"size":{"type":"string","enum":["1024x1024","1024x1792","1792x1024"],"default":"1024x1024","description":"The size of the generated image."},"style":{"type":"string","enum":["vivid","natural"],"default":"vivid","description":"The style of the generated images."},"response_format":{"type":"string","enum":["url","b64_json"],"default":"url","description":"The format in which the generated images are returned."}},"required":["model","prompt"],"title":"dall-e-3"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image using a simple prompt. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.", "model": "dall-e-3", "quality": "hd" } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'dall-e-3', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.', quality: 'hd', }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %} {% hint style="info" %} Note that the model applies automatic prompt enhancement, and this behavior cannot be disabled. The enhanced prompt is also returned in the response (see the `revised_prompt` parameter in the response). {% endhint %}
Response {% code overflow="wrap" %} ```json5 { created: 1756973055, data: [ { revised_prompt: 'A massive T-Rex is taking a well-deserved vacation at a tranquil beach. The charismatic dinosaur lies leisurely on a large, comfortable sun lounger. Its tiny, clawed hands hold a pair of fashionable sunglasses in place over its sharp, menacing eyes, adding an air of humor to the otherwise intimidating figure. The soothing sound of the waves and the gentle warmth of the sun create a calming atmosphere around the chilling predator, lending the scene an amusing contradiction.', url: 'https://oaidalleapiprodscus.blob.core.windows.net/private/org-5drZvxmo1TGoMx2jeKKGAGSh/user-eKr1xiaNRxSYqgKrXfgZzSAJ/img-B7BCSmDWQgWlGA2vu24HSzqS.png?st=2025-09-04T07%3A04%3A15Z&se=2025-09-04T09%3A04%3A15Z&sp=r&sv=2024-08-04&sr=b&rscd=inline&rsct=image/png&skoid=38e27a3b-6174-4d3e-90ac-d7d9ad49543f&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2025-09-04T02%3A45%3A18Z&ske=2025-09-05T02%3A45%3A18Z&sks=b&skv=2024-08-04&sig=fGRfHnpFybyg6wwJw7PYXJKM1AF1NWwD/W5qPKIha7U%3D' } ] } ``` {% endcode %}

"A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses."

--- # Source: https://docs.aimlapi.com/api-references/speech-models/text-to-speech/deepgram.md # Source: https://docs.aimlapi.com/api-references/speech-models/speech-to-text/deepgram.md # Deepgram - [nova-2](/api-references/speech-models/speech-to-text/deepgram/nova-2.md) --- # Source: https://docs.aimlapi.com/api-references/text-models-llm/deepseek/deepseek-chat-v3.1.md # DeepSeek Chat V3.1 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `deepseek/deepseek-chat-v3.1` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview August 2025 update of the [DeepSeek V3](https://docs.aimlapi.com/api-references/text-models-llm/deepseek/deepseek-chat) non-reasoning model. ## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
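For orientation before diving into the schema below, here is a minimal sketch of such a request in Python. It only illustrates the required fields (`model` and `messages`); the question is a placeholder of ours, and the API key goes after `Bearer `, following the same convention as the other examples in these docs.

{% code overflow="wrap" %}
```python
import requests

def main():
    response = requests.post(
        "https://api.aimlapi.com/v1/chat/completions",
        headers={
            # Insert your AIML API Key after "Bearer ":
            "Authorization": "Bearer ",
            "Content-Type": "application/json",
        },
        json={
            "model": "deepseek/deepseek-chat-v3.1",
            "messages": [
                {"role": "user", "content": "Hello! What can you do, in one sentence?"}
            ],
        },
    )
    response.raise_for_status()
    data = response.json()
    print(data["choices"][0]["message"]["content"])

if __name__ == "__main__":
    main()
```
{% endcode %}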
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["deepseek/deepseek-chat-v3.1"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. 
This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"echo":{"type":"boolean","description":"If True, the response will contain the prompt. Can be used with logprobs to return prompt logprobs."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"top_a":{"type":"number","minimum":0,"maximum":1,"description":"Alternate top sampling parameter."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. 
Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"deepseek/deepseek-chat-v3.1"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"deepseek/deepseek-chat-v3.1", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'deepseek/deepseek-chat-v3.1', messages:[{ role:'user', content: 'Hello'} // Insert your question instead of Hello ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "c13865eb-50bf-440c-922f-19b1bbef517d", "system_fingerprint": "fp_feb633d1f5_prod0820_fp8_kvcache", "object": "chat.completion", "choices": [ { "index": 0, "finish_reason": "stop", "logprobs": null, "message": { "role": "assistant", "content": "Hello! How can I assist you today? 😊", "reasoning_content": "" } } ], "created": 1756386652, "model": "deepseek-chat", "usage": { "prompt_tokens": 1, "completion_tokens": 39, "total_tokens": 40, "prompt_tokens_details": { "cached_tokens": 0 }, "prompt_cache_hit_tokens": 0, "prompt_cache_miss_tokens": 5 } } ``` {% endcode %}
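If you only need the model’s reply rather than the full response object, it can be read from `choices[0].message.content`, and the `usage` block carries the token counts. Below is a minimal, self-contained Python sketch based on the example above (the `<YOUR_AIMLAPI_KEY>` placeholder stands for your actual key):

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "deepseek/deepseek-chat-v3.1",
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
response.raise_for_status()
data = response.json()

# The generated text sits in the first element of "choices"
print(data["choices"][0]["message"]["content"])

# Token accounting from the "usage" block, e.g. for cost tracking
usage = data["usage"]
print(f'{usage["prompt_tokens"]} prompt + {usage["completion_tokens"]} completion '
      f'= {usage["total_tokens"]} total tokens')
```
{% endcode %}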
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/deepseek/deepseek-chat.md # DeepSeek V3 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `deepseek-chat` * `deepseek/deepseek-chat` * `deepseek/deepseek-chat-v3-0324` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} {% hint style="success" %} We provide the latest version of this model from **Mar 24, 2025**.\ All three IDs listed above refer to the same model; we support them for backward compatibility. {% endhint %} ## Model Overview DeepSeek V3 (or deepseek-chat) is an advanced conversational AI designed to deliver highly engaging and context-aware dialogues. This model excels in understanding and generating human-like text, making it an ideal solution for creating responsive and intelligent chatbots. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `<YOUR_AIMLAPI_KEY>` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to. :digit\_four: **(Optional) Adjust other parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them; a short example of passing optional parameters is shown right after these instructions. :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
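For step 4, optional parameters are passed as additional top-level keys in the same JSON body as `model` and `messages`. Here is a minimal sketch using two parameters from the schema below, `temperature` and `max_tokens`; the values are illustrative only:

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        # Required parameters
        "model": "deepseek-chat",
        "messages": [{"role": "user", "content": "Hello"}],
        # Optional parameters (illustrative values)
        "temperature": 0.2,   # lower values make the output more focused and deterministic
        "max_tokens": 256,    # upper bound on generated tokens, useful for cost control
    },
)
print(response.json())
```
{% endcode %}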
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["deepseek/deepseek-chat","deepseek-chat","deepseek/deepseek-chat-v3-0324"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. 
This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"echo":{"type":"boolean","description":"If True, the response will contain the prompt. Can be used with logprobs to return prompt logprobs."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"top_a":{"type":"number","minimum":0,"maximum":1,"description":"Alternate top sampling parameter."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. 
Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"deepseek-chat"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"deepseek-chat", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { try { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of YOUR_AIMLAPI_KEY 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'deepseek-chat', messages:[ { role:'user', // Insert your question for the model here, instead of Hello: content: 'Hello' } ] }), }); if (!response.ok) { throw new Error(`HTTP error! Status ${response.status}`); } const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } catch (error) { console.error('Error', error); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json { "id": "gen-1744194041-A363xKnsNwtv6gPnUPnO", "object": "chat.completion", "choices": [ { "index": 0, "finish_reason": "stop", "logprobs": null, "message": { "role": "assistant", "content": "Hello! 😊 How can I assist you today? Feel free to ask me anything—I'm here to help! 🚀", "reasoning_content": "", "refusal": null } } ], "created": 1744194041, "model": "deepseek/deepseek-chat-v3-0324", "usage": { "prompt_tokens": 16, "completion_tokens": 88, "total_tokens": 104 } } ``` {% endcode %}
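The schema above also documents a `stream` parameter: when it is set to `true`, the response arrives as server-sent events carrying `chat.completion.chunk` objects whose `choices[0].delta.content` holds incremental pieces of the reply. A rough Python sketch of consuming such a stream is shown below; the exact event framing (the `data:` prefix and the `[DONE]` sentinel) follows the common OpenAI-style convention and should be treated as an assumption:

{% code overflow="wrap" %}
```python
import json
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "deepseek-chat",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,  # request server-sent events instead of a single JSON body
    },
    stream=True,  # tell requests not to buffer the whole response
)
response.raise_for_status()

for raw_line in response.iter_lines():
    if not raw_line:
        continue
    line = raw_line.decode("utf-8")
    if not line.startswith("data: "):
        continue
    payload = line[len("data: "):]
    if payload.strip() == "[DONE]":  # common end-of-stream sentinel (assumption)
        break
    chunk = json.loads(payload)
    # Each chunk's choices[0].delta carries an incremental piece of the reply
    delta = chunk["choices"][0].get("delta") or {}
    print(delta.get("content") or "", end="", flush=True)
print()
```
{% endcode %}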
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/deepseek/deepseek-non-reasoner-v3.1-terminus.md # Deepseek Non-reasoner V3.1 Terminus {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `deepseek/deepseek-non-reasoner-v3.1-terminus` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview September 2025 update of [the DeepSeek Chat V3.1](https://docs.aimlapi.com/api-references/text-models-llm/deepseek/deepseek-chat-v3.1) non-reasoning model. The model produces more consistent and dependable results. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `<YOUR_AIMLAPI_KEY>` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to. :digit\_four: **(Optional) Adjust other parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
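As the schema below shows, the `messages` array accepts `system`, `developer`, `user`, and `assistant` messages, not just a single user prompt. A minimal sketch that pairs a system instruction with a user prompt for this model (the instruction text and the `<YOUR_AIMLAPI_KEY>` placeholder are illustrative):

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "deepseek/deepseek-non-reasoner-v3.1-terminus",
        "messages": [
            # An optional system message sets the overall behavior of the model
            {"role": "system", "content": "You are a concise technical assistant."},
            # The user message carries the actual prompt
            {"role": "user", "content": "Hello"},
        ],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}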
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["deepseek/deepseek-non-reasoner-v3.1-terminus"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. 
This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"echo":{"type":"boolean","description":"If True, the response will contain the prompt. Can be used with logprobs to return prompt logprobs."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"top_a":{"type":"number","minimum":0,"maximum":1,"description":"Alternate top sampling parameter."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. 
Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"deepseek/deepseek-non-reasoner-v3.1-terminus"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"deepseek/deepseek-non-reasoner-v3.1-terminus", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'deepseek/deepseek-non-reasoner-v3.1-terminus', messages:[{ role:'user', content: 'Hello'} // Insert your question instead of Hello ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "cc8c3054-115d-4dac-9269-2abffcaabab5", "system_fingerprint": "fp_ffc7281d48_prod0820_fp8_kvcache", "object": "chat.completion", "choices": [ { "index": 0, "finish_reason": "stop", "logprobs": null, "message": { "role": "assistant", "content": "Hello! How can I assist you today? 😊", "reasoning_content": "" } } ], "created": 1761036636, "model": "deepseek-chat", "usage": { "prompt_tokens": 3, "completion_tokens": 10, "total_tokens": 13, "prompt_tokens_details": { "cached_tokens": 0 }, "prompt_cache_hit_tokens": 0, "prompt_cache_miss_tokens": 5 } } ``` {% endcode %}
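If you need to tune the model's behavior, the optional parameters listed in the API schema above (such as `temperature`, `top_p`, or `max_tokens`) are added to the same request body. A minimal sketch follows; the values chosen here are only illustrations:

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key after "Bearer"
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "deepseek/deepseek-non-reasoner-v3.1-terminus",
        "messages": [{"role": "user", "content": "Hello"}],
        # Optional parameters from the API schema; values are illustrative.
        "temperature": 0.2,  # lower values give more focused, deterministic output
        "max_tokens": 256,   # upper bound on generated tokens, useful for cost control
    },
)
print(response.json())
```
{% endcode %}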
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/deepseek/deepseek-prover-v2.md # DeepSeek Prover V2 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `deepseek/deepseek-prover-v2` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A massive 671B-parameter model, presumed to focus on logic and mathematics. It appears to be an upgrade over DeepSeek Prover V1.5. ## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Insert your actual AI/ML API key from your account into the `Authorization` header (after `Bearer`).\
:black\_small\_square: Insert your question or request into the `content` field; this is what the model will respond to.

:digit\_four: **(Optional)** **Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
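Since this model is presumed to focus on logic and mathematics, a typical request is a proof or derivation prompt. The sketch below reuses the request shape from the [code example](#code-example) at the bottom of this page; the prompt text itself is only an illustration:

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key after "Bearer"
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "deepseek/deepseek-prover-v2",
        "messages": [
            {
                "role": "user",
                # Illustrative prompt; replace it with the statement you want proved.
                "content": "Prove that the sum of two even integers is even.",
            }
        ],
    },
)
# The assistant's answer, per the response schema above.
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}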
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["deepseek/deepseek-prover-v2"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"top_a":{"type":"number","minimum":0,"maximum":1,"description":"Alternate top sampling parameter."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. 
Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"deepseek/deepseek-prover-v2"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"deepseek/deepseek-prover-v2", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { try { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of YOUR_AIMLAPI_KEY 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'deepseek/deepseek-prover-v2', messages:[ { role:'user', // Insert your question for the model here, instead of Hello: content: 'Hello' } ] }), }); if (!response.ok) { throw new Error(`HTTP error! 
Status ${response.status}`); } const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } catch (error) { console.error('Error', error); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json {'id': 'gen-1747126732-rD70SgJEEBVBXPHmKlNJ', 'object': 'chat.completion', 'choices': [{'index': 0, 'finish_reason': 'stop', 'logprobs': None, 'message': {'role': 'assistant', 'content': "Hello there! As a virtual assistant, I'm here to help you with a wide variety of tasks and questions. Here are some of the things I can do: \n \n1. Provide information on a wide range of topics, from science and history to pop culture and current events. \n2. Answer factual questions using my knowledge base. \n3. Assist with homework or research projects by providing explanations, summaries, and resources. \n4. Help with language-related tasks such as grammar, vocabulary, translations, and writing assistance. \n5. Engage in general conversation, discussing ideas, and providing opinions on various subjects. \n6. Offer advice or tips on various life situations, though not as a substitute for professional guidance. \n7. Perform calculations, solve math problems, and help with understanding mathematical concepts. \n8. Generate creative content like stories, poems, or song lyrics. \n9. Play interactive games, such as word games or trivia. \n10. Help you practice a language by conversing in it. \n \nFeel free to ask me anything, and I'll do my best to assist you!", 'reasoning_content': None, 'refusal': None}}], 'created': 1747126732, 'model': 'deepseek/deepseek-prover-v2', 'usage': {'prompt_tokens': 15, 'completion_tokens': 1021, 'total_tokens': 1036, 'prompt_tokens_details': None}} ``` {% endcode %}
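If you only need the generated text rather than the whole payload, the reply sits at `choices[0].message.content` in the structure shown above. Below is a minimal helper, assuming `data` holds the parsed response from the Python example (`data = response.json()`):

{% code overflow="wrap" %}
```python
def extract_reply(data: dict) -> str:
    """Return the assistant's text from a parsed /v1/chat/completions response."""
    return data["choices"][0]["message"]["content"]


# With the response above, this would print the assistant's greeting and
# the total token count reported in `usage`:
# print(extract_reply(data))
# print(data["usage"]["total_tokens"])
```
{% endcode %}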
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/deepseek/deepseek-r1.md # DeepSeek R1 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `deepseek/deepseek-r1` * `deepseek-reasoner` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} {% hint style="success" %} Both IDs listed above refer to the same model; we support them for backward compatibility. {% endhint %} ## Model Overview DeepSeek R1 is a cutting-edge reasoning model developed by DeepSeek AI, designed to excel in complex problem-solving, mathematical reasoning, and programming assistance. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `<YOUR_AIMLAPI_KEY>` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to. :digit\_four: **(Optional) Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters to adjust the model’s behavior (see the short example after these steps). Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. :digit\_five: **Run your modified code** Run the modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
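For step 4, here is a minimal sketch of the same request with a few optional parameters filled in. The parameter names come from the API schema below; the values are purely illustrative, and `<YOUR_AIMLAPI_KEY>` is a placeholder for your own key.

{% code overflow="wrap" %}
```python
import requests

# Illustrative only: the same request as in the code example below, with a few
# of the optional parameters from the API schema added. The values are
# arbitrary examples; <YOUR_AIMLAPI_KEY> is a placeholder for your key.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "deepseek/deepseek-r1",
        "messages": [{"role": "user", "content": "Hello"}],
        "max_tokens": 512,    # cap on generated tokens
        "temperature": 0.7,   # higher values make output more random
        "top_p": 0.9,         # nucleus sampling; alter this or temperature, not both
    },
)
print(response.json())
```
{% endcode %}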
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["deepseek/deepseek-r1","deepseek-reasoner"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. 
This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"echo":{"type":"boolean","description":"If True, the response will contain the prompt. Can be used with logprobs to return prompt logprobs."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. 
Keep n as 1 to minimize costs."}},"required":["model","messages"],"title":"deepseek-reasoner"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"deepseek/deepseek-r1", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { try { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of YOUR_AIMLAPI_KEY 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'deepseek/deepseek-r1', messages:[ { role:'user', // Insert your question for the model here, instead of Hello: content: 'Hello' } ] }), }); if (!response.ok) { throw new Error(`HTTP error! 
Status ${response.status}`); } const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } catch (error) { console.error('Error', error); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': 'npPT68N-zqrih-92d94499ec25b74e', 'object': 'chat.completion', 'choices': [{'index': 0, 'finish_reason': 'stop', 'logprobs': None, 'message': {'role': 'assistant', 'content': '\nHello! How can I assist you today? 😊', 'reasoning_content': '', 'tool_calls': []}}], 'created': 1744193985, 'model': 'deepseek-ai/DeepSeek-R1', 'usage': {'prompt_tokens': 5, 'completion_tokens': 74, 'total_tokens': 79}} ``` {% endcode %}
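The schema above also accepts `stream: true`, in which case the response arrives as server-sent events carrying `chat.completion.chunk` objects with incremental `delta.content`. Below is a rough sketch of consuming such a stream in Python; the exact event framing (including the end-of-stream marker) is an assumption to verify against the output you actually receive.

{% code overflow="wrap" %}
```python
import json
import requests

# A rough sketch of consuming the streamed variant of the request above.
# Assumes standard "data: {...}" server-sent-event lines and a "[DONE]"
# end-of-stream marker; verify both against the actual stream you receive.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "deepseek/deepseek-r1",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,
    },
    stream=True,
)
response.raise_for_status()

for line in response.iter_lines():
    if not line:
        continue
    payload = line.decode("utf-8").removeprefix("data: ").strip()
    if payload == "[DONE]":
        break
    chunk = json.loads(payload)
    delta = chunk["choices"][0].get("delta") or {}
    # Each chunk carries an incremental piece of the assistant message.
    print(delta.get("content") or "", end="", flush=True)
print()
```
{% endcode %}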
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/deepseek/deepseek-reasoner-v3.1-terminus.md # DeepSeek Reasoner V3.1 Terminus {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `deepseek/deepseek-reasoner-v3.1-terminus` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A September 2025 update of [the DeepSeek Reasoner V3.1](https://docs.aimlapi.com/api-references/text-models-llm/deepseek/deepseek-reasoner-v3.1) model, delivering more consistent and dependable results. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `<YOUR_AIMLAPI_KEY>` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to. :digit\_four: **(Optional) Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters to adjust the model’s behavior (see the short example after these steps). Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. :digit\_five: **Run your modified code** Run the modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
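As a quick illustration of steps 3 and 4, the request body below shows the required `model` and `messages` fields, plus an optional system message (a message type listed in the API schema below) that can be used to steer the model’s behavior. This is a minimal sketch rather than the full code example referenced above; `<YOUR_AIMLAPI_KEY>` is a placeholder.

{% code overflow="wrap" %}
```python
import requests

# A minimal sketch for this model (not the full code example referenced above):
# the required `model` and `messages` fields, plus an optional system message
# to steer behavior. <YOUR_AIMLAPI_KEY> is a placeholder for your key.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "deepseek/deepseek-reasoner-v3.1-terminus",
        "messages": [
            {"role": "system", "content": "Answer as concisely as possible."},
            {"role": "user", "content": "Hello"},
        ],
    },
)
print(response.json())
```
{% endcode %}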
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["deepseek/deepseek-reasoner-v3.1-terminus"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. 
This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"echo":{"type":"boolean","description":"If True, the response will contain the prompt. Can be used with logprobs to return prompt logprobs."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. 
Keep n as 1 to minimize costs."}},"required":["model","messages"],"title":"deepseek/deepseek-reasoner-v3.1-terminus"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"deepseek/deepseek-reasoner-v3.1-terminus", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'deepseek/deepseek-reasoner-v3.1-terminus', messages:[{ role:'user', content: 'Hello'} // Insert your question instead of Hello ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
**Response**:

{% code overflow="wrap" %}
```json5
{
  "id": "543f56cb-f59f-42cc-8ed7-8efdd72f185d",
  "system_fingerprint": "fp_ffc7281d48_prod0820_fp8_kvcache",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I assist you today? 😊",
        "reasoning_content": ""
      }
    }
  ],
  "created": 1761034613,
  "model": "deepseek-reasoner",
  "usage": {
    "prompt_tokens": 3,
    "completion_tokens": 98,
    "total_tokens": 101,
    "prompt_tokens_details": {
      "cached_tokens": 0
    },
    "completion_tokens_details": {
      "reasoning_tokens": 99
    },
    "prompt_cache_hit_tokens": 0,
    "prompt_cache_miss_tokens": 5
  }
}
```
{% endcode %}
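The API schema above also documents a `stream` parameter whose output is delivered as server-sent events (`text/event-stream`), with partial text arriving in `choices[].delta.content`. As an illustration only (this snippet is not part of the official example above), the Python sketch below enables streaming and prints the deltas as they arrive. The `<YOUR_AIMLAPI_KEY>` placeholder is ours, and the `data: {...}` / `data: [DONE]` framing is an assumption based on common SSE conventions, not something the schema states explicitly.

{% code overflow="wrap" %}
```python
# Minimal streaming sketch (assumption: the endpoint emits standard
# "data: {...}" server-sent events, terminated by "data: [DONE]").
import json
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "deepseek/deepseek-reasoner-v3.1-terminus",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,                            # ask for server-sent events
        "stream_options": {"include_usage": True}, # final chunk carries usage stats
    },
    stream=True,
)
response.raise_for_status()

for line in response.iter_lines(decode_unicode=True):
    if not line or not line.startswith("data: "):
        continue
    payload = line[len("data: "):]
    if payload == "[DONE]":
        break
    chunk = json.loads(payload)
    # Each chunk carries partial message text in choices[].delta.content.
    for choice in chunk.get("choices", []):
        delta = choice.get("delta") or {}
        print(delta.get("content") or "", end="", flush=True)
```
{% endcode %}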
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/deepseek/deepseek-reasoner-v3.1.md

# DeepSeek Reasoner V3.1

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `deepseek/deepseek-reasoner-v3.1`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}

## Model Overview

August 2025 update of [the DeepSeek R1](https://docs.aimlapi.com/api-references/text-models-llm/deepseek/deepseek-r1) reasoning model. Skilled at complex problem-solving, mathematical reasoning, and programming assistance.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Insert your actual AI/ML API key from your account in place of the placeholder in the `Authorization` header.\
:black\_small\_square: Insert your question or request into the `content` field — this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior (a minimal sketch with a couple of optional parameters follows these steps). Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
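As a hedged illustration of steps 3 and 4 (this snippet is not the official example below), the Python sketch reads the key from a hypothetical `AIMLAPI_API_KEY` environment variable and sets two of the optional parameters listed in the API schema, `temperature` and `max_tokens`, alongside the required `model` and `messages`.

{% code overflow="wrap" %}
```python
import os
import requests

# Assumption: the key is stored in an AIMLAPI_API_KEY environment variable
# (any secure storage works; hard-coding keys in source is best avoided).
api_key = os.environ["AIMLAPI_API_KEY"]

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
    json={
        # Required parameters (already present in the official example):
        "model": "deepseek/deepseek-reasoner-v3.1",
        "messages": [{"role": "user", "content": "Explain recursion in one paragraph."}],
        # Optional parameters from the API schema below:
        "temperature": 0.2,  # lower values give more focused, deterministic output
        "max_tokens": 512,   # cap on generated tokens, useful for cost control
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}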
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["deepseek/deepseek-reasoner-v3.1"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. 
This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"echo":{"type":"boolean","description":"If True, the response will contain the prompt. Can be used with logprobs to return prompt logprobs."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. 
Keep n as 1 to minimize costs."}},"required":["model","messages"],"title":"deepseek/deepseek-reasoner-v3.1"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"deepseek/deepseek-reasoner-v3.1", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'deepseek/deepseek-reasoner-v3.1', messages:[{ role:'user', content: 'Hello'} // Insert your question instead of Hello ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
**Response**:

{% code overflow="wrap" %}
```json5
{
  "id": "ca664281-d3c3-40d3-9d80-fe96a65884dd",
  "system_fingerprint": "fp_feb633d1f5_prod0820_fp8_kvcache",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today? 😊",
        "reasoning_content": ""
      }
    }
  ],
  "created": 1756386069,
  "model": "deepseek-reasoner",
  "usage": {
    "prompt_tokens": 1,
    "completion_tokens": 325,
    "total_tokens": 326,
    "prompt_tokens_details": {
      "cached_tokens": 0
    },
    "completion_tokens_details": {
      "reasoning_tokens": 80
    },
    "prompt_cache_hit_tokens": 0,
    "prompt_cache_miss_tokens": 5
  }
}
```
{% endcode %}
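If it helps to see the response fields in use, the short Python sketch below (not part of the original page) pulls the assistant reply and token usage out of a parsed response like the one above. The `summarize_completion` helper name is hypothetical; the field names follow the sample JSON and the response schema.

{% code overflow="wrap" %}
```python
# `data` is the parsed JSON response from the chat completions call,
# e.g. data = response.json() in the Python example above.
def summarize_completion(data: dict) -> str:
    choice = data["choices"][0]
    message = choice["message"]
    usage = data.get("usage", {})
    details = usage.get("completion_tokens_details") or {}
    return (
        f"finish_reason={choice['finish_reason']}\n"
        f"reply={message['content']}\n"
        f"prompt_tokens={usage.get('prompt_tokens')}, "
        f"completion_tokens={usage.get('completion_tokens')} "
        f"(reasoning_tokens={details.get('reasoning_tokens')})"
    )

# Example with the sample response shown above:
# print(summarize_completion(data))
```
{% endcode %}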
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/deepseek/deepseek-reasoner-v3.2-exp-non-thinking.md

# DeepSeek V3.2 Exp Non-thinking

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `deepseek/deepseek-non-thinking-v3.2-exp`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}

## Model Overview

September 2025 update of the [DeepSeek V3](https://docs.aimlapi.com/api-references/text-models-llm/deepseek/deepseek-chat) non-reasoning model.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Insert your actual AI/ML API key from your account in place of the placeholder in the `Authorization` header.\
:black\_small\_square: Insert your question or request into the `content` field — this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior (a short structured-output sketch follows these steps). Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
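The API schema below lists a `response_format` parameter for this model, so here is a hedged Python sketch (not from the original docs) that asks `deepseek/deepseek-non-thinking-v3.2-exp` for a JSON-object reply. The `<YOUR_AIMLAPI_KEY>` placeholder and the example prompt are ours; per the schema note, the prompt itself must instruct the model to produce JSON when `json_object` is used.

{% code overflow="wrap" %}
```python
import json
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "deepseek/deepseek-non-thinking-v3.2-exp",
        "messages": [
            {
                "role": "user",
                # The schema notes the model will not emit JSON unless asked to.
                "content": "List three European capitals as JSON shaped like "
                           '{"capitals": ["...", "...", "..."]}',
            }
        ],
        "response_format": {"type": "json_object"},  # older JSON mode from the schema
    },
)
response.raise_for_status()
content = response.json()["choices"][0]["message"]["content"]
print(json.loads(content))  # parsed dict, e.g. {"capitals": [...]}
```
{% endcode %}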
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["deepseek/deepseek-non-thinking-v3.2-exp"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. 
This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"echo":{"type":"boolean","description":"If True, the response will contain the prompt. Can be used with logprobs to return prompt logprobs."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"top_a":{"type":"number","minimum":0,"maximum":1,"description":"Alternate top sampling parameter."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. 
Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"deepseek/deepseek-non-thinking-v3.2-exp"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"deepseek/deepseek-non-thinking-v3.2-exp", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'deepseek/deepseek-non-thinking-v3.2-exp', messages:[ { role:'user', content: 'Hello' // Insert your question instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "ca664281-d3c3-40d3-9d80-fe96a65884dd",
  "system_fingerprint": "fp_feb633d1f5_prod0820_fp8_kvcache",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today? 😊",
        "reasoning_content": ""
      }
    }
  ],
  "created": 1756386069,
  "model": "deepseek-reasoner",
  "usage": {
    "prompt_tokens": 1,
    "completion_tokens": 325,
    "total_tokens": 326,
    "prompt_tokens_details": {
      "cached_tokens": 0
    },
    "completion_tokens_details": {
      "reasoning_tokens": 80
    },
    "prompt_cache_hit_tokens": 0,
    "prompt_cache_miss_tokens": 5
  }
}
```
{% endcode %}
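If you only need the assistant's reply rather than the full JSON, it can be read from `choices[0].message.content` of the parsed response (see the schema above). A minimal sketch of a helper that does this, assuming `data` is the dictionary returned by `response.json()` in the Python example:

{% code overflow="wrap" %}
```python
def extract_reply(data: dict) -> str:
    """Return the assistant's reply text from a parsed /v1/chat/completions response."""
    # choices[0].message.content holds the generated text (non-streaming responses).
    return data["choices"][0]["message"]["content"]


# Example usage, continuing the Python snippet above:
# data = response.json()
# print(extract_reply(data))
# print("total tokens:", data["usage"]["total_tokens"])
```
{% endcode %}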
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/deepseek/deepseek-reasoner-v3.2-exp-thinking.md

# DeepSeek V3.2 Exp Thinking

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `deepseek/deepseek-thinking-v3.2-exp`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}

## Model Overview

A September 2025 update of [the DeepSeek R1](https://docs.aimlapi.com/api-references/text-models-llm/deepseek/deepseek-r1) reasoning model, skilled at complex problem-solving, mathematical reasoning, and programming assistance.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. A short sketch of adding such parameters to the request follows these instructions.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
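The sketch below shows how optional parameters from the [API schema](#api-schema) (for example `temperature` and `max_tokens`) can be added as extra top-level fields of the request body. It assumes the same placeholder key convention as the code example at the bottom of this page:

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "deepseek/deepseek-thinking-v3.2-exp",
        "messages": [{"role": "user", "content": "Hello"}],
        # Optional parameters (see the API schema below for the full list):
        "temperature": 0.2,  # lower values make the output more deterministic
        "max_tokens": 1024,  # upper bound on the length of the completion
    },
)
print(response.json())
```
{% endcode %}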
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["deepseek/deepseek-thinking-v3.2-exp"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. 
This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"echo":{"type":"boolean","description":"If True, the response will contain the prompt. Can be used with logprobs to return prompt logprobs."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. 
Keep n as 1 to minimize costs."}},"required":["model","messages"],"title":"deepseek/deepseek-thinking-v3.2-exp"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"deepseek/deepseek-thinking-v3.2-exp", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'deepseek/deepseek-thinking-v3.2-exp', messages:[ { role:'user', content: 'Hello' // Insert your question instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "ca664281-d3c3-40d3-9d80-fe96a65884dd",
  "system_fingerprint": "fp_feb633d1f5_prod0820_fp8_kvcache",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today? 😊",
        "reasoning_content": ""
      }
    }
  ],
  "created": 1756386069,
  "model": "deepseek-reasoner",
  "usage": {
    "prompt_tokens": 1,
    "completion_tokens": 325,
    "total_tokens": 326,
    "prompt_tokens_details": {
      "cached_tokens": 0
    },
    "completion_tokens_details": {
      "reasoning_tokens": 80
    },
    "prompt_cache_hit_tokens": 0,
    "prompt_cache_miss_tokens": 5
  }
}
```
{% endcode %}
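As the sample response shows, messages from this thinking model may include a `reasoning_content` field alongside the final `content`. A minimal sketch of a helper that separates the two, assuming the non-streaming response structure shown above:

{% code overflow="wrap" %}
```python
def split_reasoning(data: dict) -> tuple[str, str]:
    """Return (reasoning, answer) from a parsed /v1/chat/completions response."""
    message = data["choices"][0]["message"]
    # reasoning_content may be empty or absent, depending on the request
    return message.get("reasoning_content", ""), message.get("content", "")


# Example usage, continuing the Python snippet above:
# reasoning, answer = split_reasoning(response.json())
```
{% endcode %}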
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/deepseek/deepseek-v3.2-speciale.md

# DeepSeek V3.2 Speciale

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `deepseek/deepseek-v3.2-speciale`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}

## Model Overview

A high-compute variant of DeepSeek-V3.2 that outperforms GPT-5 and matches Gemini-3.0-Pro in reasoning benchmarks, achieving gold-medal-level results at the 2025 International Mathematical Olympiad (IMO) and the International Olympiad in Informatics (IOI).

{% hint style="success" %}
[Create AI/ML API Key](https://aimlapi.com/app/keys)
{% endhint %}
How to make the first API call

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
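A first request to this model follows the same pattern as the other DeepSeek chat models documented above. A minimal sketch in Python, using the model ID from the hint at the top of this page and the same placeholder key convention as the other examples in these docs:

{% code overflow="wrap" %}
```python
import requests
import json  # for printing the structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "deepseek/deepseek-v3.2-speciale",
        "messages": [
            {"role": "user", "content": "Hello"}  # insert your prompt here, instead of Hello
        ],
    },
)
print(json.dumps(response.json(), indent=2, ensure_ascii=False))
```
{% endcode %}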
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"DeepSeek-V3.2-Speciale - AI/ML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["deepseek/deepseek-chat-v3.1"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. 
This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"echo":{"type":"boolean","description":"If True, the response will contain the prompt. Can be used with logprobs to return prompt logprobs."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"top_a":{"type":"number","minimum":0,"maximum":1,"description":"Alternate top sampling parameter."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. 
Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"deepseek/deepseek-chat-v3.1"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"deepseek/deepseek-v3.2-speciale", "messages":[ { "role":"user", "content":"Hi! What do you think about mankind?" # insert your prompt } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'deepseek/deepseek-v3.2-speciale', messages:[ { role:'user', content: 'Hi! What do you think about mankind?' // insert your prompt here } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "gen-1770021770-coQRs5BE5oFW8jhEBDjN", "provider": "Parasail", "model": "deepseek/deepseek-v3.2-speciale", "object": "chat.completion", "created": 1770021770, "choices": [ { "logprobs": null, "finish_reason": "stop", "native_finish_reason": "stop", "index": 0, "message": { "role": "assistant", "content": "Hello! I think mankind is a fascinating species with incredible potential. Humans have shown remarkable creativity, empathy, and resilience throughout history, leading to extraordinary achievements in science, art, technology, and culture. At the same time, humanity faces complex challenges like inequality, conflict, and environmental issues. I believe that with collaboration, empathy, and innovation, people can overcome these obstacles and build a better future. As an AI, I’m inspired by the diversity of human thought and the drive to learn and grow. What are your thoughts on mankind?", "refusal": null, "reasoning": "We need to respond to the user's message: \"Hi! What do you think about mankind?\" This is a vague philosophical question. The assistant should respond in a friendly, thoughtful manner, perhaps with a positive and optimistic perspective. Could mention human potential, complexity, achievements, challenges, etc. Should avoid controversial or overly negative takes. Since it's an AI, it might also reflect on its own perspective as an AI. But the instruction is: we are ChatGPT, a helpful AI assistant. We should provide a thoughtful answer.\n\nWe can structure: Greet, then share thoughts: Humans are fascinating, capable of great creativity, compassion, and progress, but also have flaws and challenges. Emphasize hope for the future. Possibly mention interdependence, diversity, etc. Keep it concise and engaging.\n\nAlternatively, we could inject some humor? The user might be testing. But better to be sincere.\n\nLet's draft: \"Hello! I think mankind is an incredibly complex and fascinating species. Humans have shown remarkable capacity for creativity, empathy, and cooperation, leading to advancements in science, art, and society. At the same time, we face challenges like conflict and environmental issues. I believe in the potential for humans to learn, grow, and build a better future together. What are your thoughts?\" That's balanced.\n\nBut note: As an AI, we can also mention that we are designed to assist and learn from humans, so we have a positive view. Could incorporate that.\n\nLet's produce final answer.\n", "reasoning_details": [ { "format": "unknown", "index": 0, "type": "reasoning.text", "text": "We need to respond to the user's message: \"Hi! What do you think about mankind?\" This is a vague philosophical question. The assistant should respond in a friendly, thoughtful manner, perhaps with a positive and optimistic perspective. Could mention human potential, complexity, achievements, challenges, etc. Should avoid controversial or overly negative takes. Since it's an AI, it might also reflect on its own perspective as an AI. But the instruction is: we are ChatGPT, a helpful AI assistant. We should provide a thoughtful answer.\n\nWe can structure: Greet, then share thoughts: Humans are fascinating, capable of great creativity, compassion, and progress, but also have flaws and challenges. Emphasize hope for the future. Possibly mention interdependence, diversity, etc. Keep it concise and engaging.\n\nAlternatively, we could inject some humor? The user might be testing. But better to be sincere.\n\nLet's draft: \"Hello! 
I think mankind is an incredibly complex and fascinating species. Humans have shown remarkable capacity for creativity, empathy, and cooperation, leading to advancements in science, art, and society. At the same time, we face challenges like conflict and environmental issues. I believe in the potential for humans to learn, grow, and build a better future together. What are your thoughts?\" That's balanced.\n\nBut note: As an AI, we can also mention that we are designed to assist and learn from humans, so we have a positive view. Could incorporate that.\n\nLet's produce final answer.\n" } ] } } ], "usage": { "prompt_tokens": 13, "completion_tokens": 414, "total_tokens": 427, "cost": 0.000502, "is_byok": false, "prompt_tokens_details": { "cached_tokens": 0, "audio_tokens": 0 }, "cost_details": { "upstream_inference_cost": 0.000502, "upstream_inference_prompt_cost": 5.2e-06, "upstream_inference_completions_cost": 0.0004968 }, "completion_tokens_details": { "reasoning_tokens": 388, "audio_tokens": 0 } }, "meta": { "usage": { "credits_used": 385 } } } ``` {% endcode %}
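The assistant's reply is available at `choices[0].message.content`; reasoning-capable DeepSeek variants may additionally return a `reasoning` field on the same message object, as in the sample response above. A minimal sketch of extracting both from the `data` dict produced by the Python example (field names follow the sample response; `reasoning` may be absent for non-reasoning models):

{% code overflow="wrap" %}
```python
# Minimal sketch: pull the visible answer and the optional reasoning trace
# out of the parsed chat completion response stored in `data` above.
message = data["choices"][0]["message"]
answer = message.get("content")
reasoning = message.get("reasoning")  # present only when the model exposes its reasoning

print("Answer:", answer)
if reasoning:
    print("Reasoning trace:", reasoning)
```
{% endcode %}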
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/deepseek.md # DeepSeek - [DeepSeek V3](/api-references/text-models-llm/deepseek/deepseek-chat.md) - [DeepSeek R1](/api-references/text-models-llm/deepseek/deepseek-r1.md) - [DeepSeek Prover V2](/api-references/text-models-llm/deepseek/deepseek-prover-v2.md) - [DeepSeek Chat V3.1](/api-references/text-models-llm/deepseek/deepseek-chat-v3.1.md) - [DeepSeek Reasoner V3.1](/api-references/text-models-llm/deepseek/deepseek-reasoner-v3.1.md) - [Deepseek Non-reasoner V3.1 Terminus](/api-references/text-models-llm/deepseek/deepseek-non-reasoner-v3.1-terminus.md) - [Deepseek Reasoner V3.1 Terminus](/api-references/text-models-llm/deepseek/deepseek-reasoner-v3.1-terminus.md) - [DeepSeek V3.2 Exp Non-thinking](/api-references/text-models-llm/deepseek/deepseek-reasoner-v3.2-exp-non-thinking.md) - [DeepSeek V3.2 Exp Thinking](/api-references/text-models-llm/deepseek/deepseek-reasoner-v3.2-exp-thinking.md) - [DeepSeek V3.2 Speciale](/api-references/text-models-llm/deepseek/deepseek-v3.2-speciale.md) --- # Source: https://docs.aimlapi.com/api-references/speech-models/text-to-speech/elevenlabs/eleven_multilingual_v2.md # eleven\_multilingual\_v2 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `elevenlabs/eleven_multilingual_v2` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} A high-quality text-to-speech model offering natural-sounding intonation, support for **29** languages, and a broad selection of built-in voices. A wide range of output audio formats and quality settings is also available. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). 
## API Schema ## POST /v1/tts > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.TextToSpeechResponse":{"type":"object","properties":{"metadata":{"type":"object","properties":{"transaction_key":{"type":"string"},"request_id":{"type":"string"},"sha256":{"type":"string"},"created":{"type":"string","format":"date-time"},"duration":{"type":"number"},"channels":{"type":"number"},"models":{"type":"array","items":{"type":"string"}},"model_info":{"type":"object","additionalProperties":{"type":"object","properties":{"name":{"type":"string"},"version":{"type":"string"},"arch":{"type":"string"}},"required":["name","version","arch"]}}},"required":["transaction_key","request_id","sha256","created","duration","channels","models","model_info"]}},"required":["metadata"]}}},"paths":{"/v1/tts":{"post":{"operationId":"VoiceModelsController_textToSpeech_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["elevenlabs/eleven_multilingual_v2"]},"text":{"type":"string","description":"The text content to be converted to speech."},"voice":{"type":"string","enum":["Rachel","Drew","Clyde","Paul","Aria","Domi","Dave","Roger","Fin","Sarah","Antoni","Laura","Thomas","Charlie","George","Emily","Elli","Callum","Patrick","River","Harry","Liam","Dorothy","Josh","Arnold","Charlotte","Alice","Matilda","James","Joseph","Will","Jeremy","Jessica","Eric","Michael","Ethan","Chris","Gigi","Freya","Santa Claus","Brian","Grace","Daniel","Lily","Serena","Adam","Nicole","Bill","Jessie","Sam","Glinda","Giovanni","Mimi"],"default":"Rachel","description":"Name of the voice to be used."},"apply_text_normalization":{"type":"string","enum":["auto","on","off"],"description":"This parameter controls text normalization with three modes: 'auto', 'on', and 'off'. When set to 'auto', the system will automatically decide whether to apply text normalization (e.g., spelling out numbers). With 'on', text normalization will always be applied, while with 'off', it will be skipped."},"next_text":{"type":"string","description":"The text that comes after the text of the current request. Can be used to improve the speech's continuity when concatenating together multiple generations or to influence the speech's continuity in the current generation."},"previous_text":{"type":"string","description":"The text that came before the text of the current request. Can be used to improve the speech's continuity when concatenating together multiple generations or to influence the speech's continuity in the current generation."},"output_format":{"type":"string","enum":["mp3_22050_32","mp3_44100_32","mp3_44100_64","mp3_44100_96","mp3_44100_128","mp3_44100_192","pcm_8000","pcm_16000","pcm_22050","pcm_24000","pcm_44100","pcm_48000","ulaw_8000","alaw_8000","opus_48000_32","opus_48000_64","opus_48000_96","opus_48000_128","opus_48000_192"],"description":"Format of the output content for non-streaming requests. Controls how the generated audio data is encoded in the response."},"voice_settings":{"type":"object","properties":{"stability":{"type":"number","description":"Determines how stable the voice is and the randomness between each generation. Lower values introduce broader emotional range for the voice. 
Higher values can result in a monotonous voice with limited emotion."},"use_speaker_boost":{"type":"boolean","description":"This setting boosts the similarity to the original speaker. Using this setting requires a slightly higher computational load, which in turn increases latency."},"similarity_boost":{"type":"number","description":"Determines how closely the AI should adhere to the original voice when attempting to replicate it."},"style":{"type":"number","description":"Determines the style exaggeration of the voice. This setting attempts to amplify the style of the original speaker. It does consume additional computational resources and might increase latency if set to anything other than 0."},"speed":{"type":"number","description":"Adjusts the speed of the voice. A value of 1.0 is the default speed, while values less than 1.0 slow down the speech, and values greater than 1.0 speed it up."}},"description":"Voice settings overriding stored settings for the given voice. They are applied only on the given request."},"seed":{"type":"integer","description":"If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed."}},"required":["model","text"]}}}},"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.TextToSpeechResponse"}}}}},"tags":["Voice Models"]}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import os import requests def main(): url = "https://api.aimlapi.com/v1/tts" headers = { # Insert your AI/ML API key instead of : "Authorization": "Bearer ", } payload = { "model": "elevenlabs/eleven_multilingual_v2", "text": ''' Cities of the future promise to radically transform how people live, work, and move. Instead of sprawling layouts, we’ll see vertical structures that integrate residential, work, and public spaces into single, self-sustaining ecosystems. Architecture will adapt to climate conditions, and buildings will be energy-efficient—generating power through solar panels, wind turbines, and even foot traffic. ''', "voice": "Alice" } response = requests.post(url, headers=headers, json=payload, stream=True) # result = os.path.join(os.path.dirname(__file__), "audio.wav") # if you run this code as a .py file result = "audio.wav" # if you run this code in Jupyter Notebook with open(result, "wb") as write_stream: for chunk in response.iter_content(chunk_size=8192): if chunk: write_stream.write(chunk) print("Audio saved to:", result) main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const fs = require("fs"); // Insert your AI/ML API key instead of : const apiKey = ""; const data = JSON.stringify({ model: "elevenlabs/eleven_multilingual_v2", text: ` Cities of the future promise to radically transform how people live, work, and move. Instead of sprawling layouts, we’ll see vertical structures that integrate residential, work, and public spaces into single, self-sustaining ecosystems. Architecture will adapt to climate conditions, and buildings will be energy-efficient—generating power through solar panels, wind turbines, and even foot traffic. 
`, voice: "Giovanni", }); const options = { hostname: "api.aimlapi.com", path: "/v1/tts", method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), } }; const req = https.request(options, (res) => { if (res.statusCode >= 400) { let error = ""; res.on("data", chunk => error += chunk); res.on("end", () => { console.error(`Error ${res.statusCode}:`, error); }); return; } const file = fs.createWriteStream("audio.wav"); res.pipe(file); file.on("finish", () => { file.close(); console.log("Audio saved to audio.wav"); }); }); req.on("error", (e) => { console.error("Request error:", e); }); req.write(data); req.end(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response ``` Audio saved to: audio.wav ```
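The request body can also carry the optional `voice_settings` and `output_format` fields described in the schema above. A hedged sketch of such a payload: the field names come from the schema, while the specific values are arbitrary illustrations rather than recommended settings.

{% code overflow="wrap" %}
```python
# Illustrative payload only: field names follow the /v1/tts schema above,
# but the numeric values are placeholders, not tuned recommendations.
payload = {
    "model": "elevenlabs/eleven_multilingual_v2",
    "text": "A short line to synthesize.",
    "voice": "Rachel",
    "output_format": "mp3_44100_128",  # any value from the enum in the schema
    "voice_settings": {
        "stability": 0.5,          # lower values allow a broader emotional range
        "similarity_boost": 0.75,  # how closely to match the original voice
        "style": 0.0,              # values above 0 add computational load and latency
        "speed": 1.0,              # 1.0 is the default speaking speed
    },
}
```
{% endcode %}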
{% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/music-models/elevenlabs/eleven_music.md # eleven\_music {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `elevenlabs/eleven_music` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} An advanced audio generation model designed to create high-quality audio tracks from textual prompts. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#full-example-generating-and-retrieving-the-audio-from-the-server) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. {% hint style="success" %} Generating a music piece using this model involves sequentially calling two endpoints: * The first one is for creating and sending a music generation task to the server (returns a generation ID). * The second one is for requesting the generated piece from the server using the generation ID received from the first endpoint. The code example combines both endpoint calls. {% endhint %} :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key from your account.\ :black\_small\_square: Provide your instructions via the `prompt` parameter. The model will use them to generate a musical composition. :digit\_four: **(Optional)** **Adjust other optional parameters if needed** Only `prompt` is a required parameter for this model (and we’ve already filled it in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schemas) ("Generate a music sample"), which lists all available parameters along with notes on how to use them. :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds 30 seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
## API Schemas ### Generate a music sample This endpoint creates and sends a music generation task to the server — and returns a generation ID and the task status. ## POST /v2/generate/audio > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/generate/audio":{"post":{"operationId":"_v2_generate_audio","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["elevenlabs/eleven_music"]},"prompt":{"type":"string","maxLength":2000,"description":"A text description that can define the genre, mood, instruments, vocals, tempo, structure, and even lyrics of the track. It can be high-level (“peaceful meditation with voiceover”) or detailed (“solo piano in C minor, 90 BPM, raw and emotional”). Use keywords to control genre, emotional tone, vocals (e.g., a cappella, two singers harmonizing), structure (e.g., “lyrics begin at 15 seconds”), or provide custom lyrics directly in the prompt."},"music_length_ms":{"type":"integer","minimum":10000,"maximum":300000,"default":10000,"description":"The length of the song to generate in milliseconds. This parameter may not always be respected by the model, and the actual audio length can differ.","format":"milliseconds"}},"required":["model","prompt"],"title":"elevenlabs/eleven_music"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated audio."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"audio_file":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated music sample from the server After sending a request for music generation, this task is added to the queue. This endpoint lets you check the status of an audio generation task using its `id`, obtained from the endpoint described above.\ If the audio generation task status is `completed`, the response will include the final result — with the generated audio URL and additional metadata.
## GET /v2/generate/audio > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/generate/audio":{"get":{"operationId":"_v2_generate_audio","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated audio."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"audio_file":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Audio From the Server The code below creates a audio generation task, then automatically polls the server every **10** seconds until it finally receives the audio URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import time import requests # Insert your AI/ML API key instead of : aimlapi_key = '' # Creating and sending an audio generation task to the server (returns a generation ID) def generate_audio(): url = "https://api.aimlapi.com/v2/generate/audio" payload = { "model": "elevenlabs/eleven_music", "prompt": "lo-fi pop hip-hop ambient music, slow intro: 10 s, then faster and with loud bass: 10 s", "music_length_ms": 20000, } headers = {"Authorization": f"Bearer {aimlapi_key}", "Content-Type": "application/json"} response = requests.post(url, json=payload, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print("Generation: ", response_data) return response_data # Requesting the result of the generation task from the server using the generation_id: def retrieve_audio(gen_id): url = "https://api.aimlapi.com/v2/generate/audio" params = { "generation_id": gen_id, } headers = {"Authorization": f"Bearer {aimlapi_key}", "Content-Type": "application/json"} response = requests.get(url, params=params, headers=headers) return response.json() # This is the main function of the program. From here, we sequentially call the audio generation and then repeatedly request the result from the server every 10 seconds: def main(): generation_response = generate_audio() gen_id = generation_response.get("id") if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = retrieve_audio(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["queued", "generating"]: print(f"Status: {status}. 
Checking again in 10 seconds.") time.sleep(10) else: print("Generation complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a audio generation task to the server function generateAudio(callback) { const data = JSON.stringify({ model: "elevenlabs/eleven_music", prompt: "lo-fi pop hip-hop ambient music, slow intro: 10 s, then faster and with loud bass: 10 s", music_length_ms: 20000, }); const url = new URL(`${baseUrl}/generate/audio`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getAudio(genId, callback) { const url = new URL(`${baseUrl}/generate/audio`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates sound generation and checks the status every 10 seconds until completion or timeout function main() { generateAudio((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 10 * 1000; // 10 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getAudio(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 10 seconds.`); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }) } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation: {'status': 'queued', 'id': '60ac7c34-3224-4b14-8e7d-0aa0db708325:elevenlabs/eleven_music'} Still waiting... Checking again in 10 seconds. Still waiting... Checking again in 10 seconds. Generation complete: {'id': '60ac7c34-3224-4b14-8e7d-0aa0db708325:elevenlabs/eleven_music', 'status': 'completed', 'audio_file': {'url': 'https://cdn.aimlapi.com/generations/hippopotamus/1757963033314-8ca7729d-b78c-4d4c-9ef9-89b2fb3d07e8.mp3'}} ``` {% endcode %}
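Once the task status is `completed`, the response contains `audio_file.url`, which points to the finished track. A minimal sketch (reusing `requests` and the `response_data` dict from the Python example above; the local file name is just an illustration) for downloading it:

{% code overflow="wrap" %}
```python
import requests

# `response_data` is the completed-generation dict returned by retrieve_audio() above.
audio_url = response_data["audio_file"]["url"]

audio = requests.get(audio_url, stream=True)
audio.raise_for_status()
with open("generated_track.mp3", "wb") as f:  # illustrative file name
    for chunk in audio.iter_content(chunk_size=8192):
        f.write(chunk)

print("Track saved to generated_track.mp3")
```
{% endcode %}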
Listen to the track we generated: {% embed url="" %} "`lo-fi pop hip-hop ambient music, slow intro: 10 s, then faster and with loud bass: 10 s"` {% endembed %} --- # Source: https://docs.aimlapi.com/api-references/speech-models/text-to-speech/elevenlabs/eleven_turbo_v2_5.md # eleven\_turbo\_v2\_5 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `elevenlabs/eleven_turbo_v2_5` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} A high-quality text-to-speech model offering natural-sounding intonation, support for **31** languages, and a broad selection of built-in voices. Up to 3× faster than [eleven\_multilingual\_v2](https://docs.aimlapi.com/api-references/speech-models/text-to-speech/elevenlabs/eleven_multilingual_v2).\ A wide range of output audio formats and quality settings is also available. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/tts > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.TextToSpeechResponse":{"type":"object","properties":{"metadata":{"type":"object","properties":{"transaction_key":{"type":"string"},"request_id":{"type":"string"},"sha256":{"type":"string"},"created":{"type":"string","format":"date-time"},"duration":{"type":"number"},"channels":{"type":"number"},"models":{"type":"array","items":{"type":"string"}},"model_info":{"type":"object","additionalProperties":{"type":"object","properties":{"name":{"type":"string"},"version":{"type":"string"},"arch":{"type":"string"}},"required":["name","version","arch"]}}},"required":["transaction_key","request_id","sha256","created","duration","channels","models","model_info"]}},"required":["metadata"]}}},"paths":{"/v1/tts":{"post":{"operationId":"VoiceModelsController_textToSpeech_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["elevenlabs/eleven_turbo_v2_5"]},"text":{"type":"string","description":"The text content to be converted to speech."},"voice":{"type":"string","enum":["Rachel","Drew","Clyde","Paul","Aria","Domi","Dave","Roger","Fin","Sarah","Antoni","Laura","Thomas","Charlie","George","Emily","Elli","Callum","Patrick","River","Harry","Liam","Dorothy","Josh","Arnold","Charlotte","Alice","Matilda","James","Joseph","Will","Jeremy","Jessica","Eric","Michael","Ethan","Chris","Gigi","Freya","Santa Claus","Brian","Grace","Daniel","Lily","Serena","Adam","Nicole","Bill","Jessie","Sam","Glinda","Giovanni","Mimi"],"default":"Rachel","description":"Name of the voice to be used."},"apply_text_normalization":{"type":"string","enum":["auto","on","off"],"description":"This parameter controls text normalization with three modes: 'auto', 'on', and 'off'. When set to 'auto', the system will automatically decide whether to apply text normalization (e.g., spelling out numbers). With 'on', text normalization will always be applied, while with 'off', it will be skipped."},"next_text":{"type":"string","description":"The text that comes after the text of the current request. 
Can be used to improve the speech's continuity when concatenating together multiple generations or to influence the speech's continuity in the current generation."},"previous_text":{"type":"string","description":"The text that came before the text of the current request. Can be used to improve the speech's continuity when concatenating together multiple generations or to influence the speech's continuity in the current generation."},"output_format":{"type":"string","enum":["mp3_22050_32","mp3_44100_32","mp3_44100_64","mp3_44100_96","mp3_44100_128","mp3_44100_192","pcm_8000","pcm_16000","pcm_22050","pcm_24000","pcm_44100","pcm_48000","ulaw_8000","alaw_8000","opus_48000_32","opus_48000_64","opus_48000_96","opus_48000_128","opus_48000_192"],"description":"Format of the output content for non-streaming requests. Controls how the generated audio data is encoded in the response."},"voice_settings":{"type":"object","properties":{"stability":{"type":"number","description":"Determines how stable the voice is and the randomness between each generation. Lower values introduce broader emotional range for the voice. Higher values can result in a monotonous voice with limited emotion."},"use_speaker_boost":{"type":"boolean","description":"This setting boosts the similarity to the original speaker. Using this setting requires a slightly higher computational load, which in turn increases latency."},"similarity_boost":{"type":"number","description":"Determines how closely the AI should adhere to the original voice when attempting to replicate it."},"style":{"type":"number","description":"Determines the style exaggeration of the voice. This setting attempts to amplify the style of the original speaker. It does consume additional computational resources and might increase latency if set to anything other than 0."},"speed":{"type":"number","description":"Adjusts the speed of the voice. A value of 1.0 is the default speed, while values less than 1.0 slow down the speech, and values greater than 1.0 speed it up."}},"description":"Voice settings overriding stored settings for the given voice. They are applied only on the given request."},"seed":{"type":"integer","description":"If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed."}},"required":["model","text"]}}}},"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.TextToSpeechResponse"}}}}},"tags":["Voice Models"]}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import os import requests def main(): url = "https://api.aimlapi.com/v1/tts" headers = { # Insert your AI/ML API key instead of : "Authorization": "Bearer ", } payload = { "model": "elevenlabs/eleven_turbo_v2_5", "text": ''' Cities of the future promise to radically transform how people live, work, and move. Instead of sprawling layouts, we’ll see vertical structures that integrate residential, work, and public spaces into single, self-sustaining ecosystems. Architecture will adapt to climate conditions, and buildings will be energy-efficient—generating power through solar panels, wind turbines, and even foot traffic. 
''', "voice": "Nicole" } response = requests.post(url, headers=headers, json=payload, stream=True) # result = os.path.join(os.path.dirname(__file__), "audio.wav") # if you run this code as a .py file result = "audio.wav" # if you run this code in Jupyter Notebook with open(result, "wb") as write_stream: for chunk in response.iter_content(chunk_size=8192): if chunk: write_stream.write(chunk) print("Audio saved to:", result) main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const fs = require("fs"); // Insert your AI/ML API key instead of : const apiKey = ""; const data = JSON.stringify({ model: "elevenlabs/eleven_turbo_v2_5", text: ` Cities of the future promise to radically transform how people live, work, and move. Instead of sprawling layouts, we’ll see vertical structures that integrate residential, work, and public spaces into single, self-sustaining ecosystems. Architecture will adapt to climate conditions, and buildings will be energy-efficient—generating power through solar panels, wind turbines, and even foot traffic. `, voice: "Nicole", }); const options = { hostname: "api.aimlapi.com", path: "/v1/tts", method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), } }; const req = https.request(options, (res) => { if (res.statusCode >= 400) { let error = ""; res.on("data", chunk => error += chunk); res.on("end", () => { console.error(`Error ${res.statusCode}:`, error); }); return; } const file = fs.createWriteStream("audio.wav"); res.pipe(file); file.on("finish", () => { file.close(); console.log("Audio saved to audio.wav"); }); }); req.on("error", (e) => { console.error("Request error:", e); }); req.write(data); req.end(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response ``` Audio saved to: audio.wav ```
{% embed url="" %} Each voice in ElevenLabs models has its own accent and unique characteristics.\ Check out this amazing female whisper we stumbled upon completely by accident. {% endembed %} --- # Source: https://docs.aimlapi.com/api-references/speech-models/voice-chat/elevenlabs.md # Source: https://docs.aimlapi.com/api-references/speech-models/text-to-speech/elevenlabs.md # Source: https://docs.aimlapi.com/api-references/music-models/elevenlabs.md # ElevenLabs - [eleven\_music](/api-references/music-models/elevenlabs/eleven_music.md) --- # Source: https://docs.aimlapi.com/integrations/elizaos.md # ElizaOS ## About [ElizaOS](https://eliza.how/docs/intro) is a powerful multi-agent simulation framework designed to create, deploy, and manage autonomous AI agents. Built with TypeScript, it provides a flexible and extensible platform for developing intelligent agents that can interact across multiple platforms while maintaining consistent personalities and knowledge. ## Installation 1. Install `bun` и `Node.js` (v18+) 2. Clone the repo and run: ```bash git clone cd eliza-starter cp .env.example .env bun i && bun run build && bun start ``` You can find more details in the [official documentation](https://eliza.how/docs/intro#installation). ## How to Use AIML API via ElizaOS 1. Define your [AIMLAPI key](https://aimlapi.com/app/keys) and other environment variables: ```bash AIMLAPI_API_KEY=sk-*** AIMLAPI_SMALL_MODEL=openai/gpt-3.5-turbo AIMLAPI_MEDIUM_MODEL=anthropic/claude-3-5-sonnet-20240521-v2:0 AIMLAPI_LARGE_MODEL=google/gemini-2.0-pro ``` 2. Configure your character in the `character.json` file as follows: ```json { "modelProvider": "aimlapi", "settings": { "model": "gpt-4", "maxInputTokens": 200000, ... } } ``` ElizaOS provides a UI at . Each configured character appears as a separate conversation partner in the left-hand panel:
Click the small speaker icon below any message to hear it read aloud:
## Our Supported Models

In the environment variables for ElizaOS, you can specify almost any of our [text models](https://docs.aimlapi.com/api-references/text-models-llm#complete-text-model-list), including:

* OpenAI ChatGPT ([openai/gpt-3.5-turbo](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-3.5-turbo), [gpt-4-turbo](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4-turbo), ...),
* Google Gemini ([google/gemini-2.0-flash](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.0-flash), ...)

---

# Source: https://docs.aimlapi.com/api-references/embedding-models.md

# Embedding Models

We support multiple embedding models. You can find the [complete list](#all-available-embedding-models) along with API reference links at the end of the page.

## What are Embeddings?

Embeddings from AI/ML API quantify the similarity between text strings. These embeddings are particularly useful for:

* **Search**: Rank search results by their relevance to a query.
* **Clustering**: Group similar text strings together.
* **Recommendations**: Suggest items based on related text strings.
* **Anomaly Detection**: Identify outliers that differ significantly from the norm.
* **Diversity Measurement**: Analyze the spread of similarities within a dataset.
* **Classification**: Categorize text strings by comparing them to labeled examples.

An embedding is a vector (list) of floating-point numbers, where the distance between vectors indicates their relatedness. Smaller distances indicate higher similarity, while larger distances suggest lower similarity (see the cosine-similarity sketch after the example response below).

## Pricing

For more information on Embeddings pricing, visit our [pricing page](https://aimlapi.com/ai-ml-api-pricing). Costs are calculated based on the number of tokens in the input.

## **Example: Generating Embeddings**

```bash
curl https://api.aimlapi.com/v1/embeddings \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $AIMLAPI_API_KEY" \
  -d '{
    "input": "Your text string goes here",
    "model": "text-embedding-3-small"
  }'
```

The response will include the embedding vector and additional metadata:
**Response**:

```json
{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "index": 0,
      "embedding": [
        -0.006929283495992422,
        -0.005336422007530928,
        // ...(omitted for spacing)
        -4.547132266452536e-05,
        -0.024047505110502243
      ]
    }
  ],
  "model": "text-embedding-3-small",
  "usage": {
    "prompt_tokens": 5,
    "total_tokens": 5
  }
}
```
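To make the "smaller distance means higher similarity" idea concrete, here is a minimal sketch that embeds a query and two candidate strings through the same `/v1/embeddings` endpoint used in the curl example above, then ranks the candidates by cosine similarity (the basis of the search and recommendation use cases). The `embed` and `cosine_similarity` helpers and the sample strings are our own illustrations, not part of the official examples; insert your AIML API Key as in the other snippets:

```python
import math
import requests

API_KEY = ""  # insert your AIML API Key here
URL = "https://api.aimlapi.com/v1/embeddings"


def embed(text: str) -> list[float]:
    """Request an embedding vector for a single string."""
    response = requests.post(
        URL,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        json={"model": "text-embedding-3-small", "input": text},
    )
    response.raise_for_status()
    return response.json()["data"][0]["embedding"]


def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Close to 1.0 when two texts are strongly related, lower otherwise."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))


query = embed("How do I reset my password?")
candidates = ["Resetting your account password", "Quarterly sales report"]

# Rank candidate strings by how related they are to the query
scores = {text: cosine_similarity(query, embed(text)) for text in candidates}
for text, score in sorted(scores.items(), key=lambda item: item[1], reverse=True):
    print(f"{score:.3f}  {text}")
```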
By default, the length of the embedding vector is 1536 for `text-embedding-3-small` or 3072 for `text-embedding-3-large`. You can reduce the dimensions of the embedding using the `dimensions` parameter without losing its ability to represent concepts. More details on embedding dimensions can be found in the embedding use case section.

## Example in Python

Here's how to use the embedding API in Python:

{% tabs %}
{% tab title="Python" %}
```python
import openai

# Initialize the API client
# Insert your AIML API Key instead of :
client = openai.OpenAI(
    base_url="https://api.aimlapi.com/v1",
    api_key="",
)

# Define the text for which to generate an embedding
text = "Laura is a DJ."

# Request the embedding
response = client.embeddings.create(
    input=text,
    model="text-embedding-3-large"
)

# Print the embedding
print(response)
```
{% endtab %}
{% endtabs %}

This example shows how to set up an API client, send text to the embedding API, and print the response with the embedding vector. Note how large a vector the model returns for even a single short input phrase.
Response {% code overflow="wrap" %} ```json CreateEmbeddingResponse(data=[Embedding(embedding=[0.025269029662013054, -0.04147709161043167, -0.018974199891090393, 0.022547749802470207, -0.058941133320331573, -0.00012580781185533851, -0.02275707945227623, 0.007461091969162226, -0.011236494407057762, 0.05337895452976227, -0.006036905571818352, 0.02435695193707943, -0.041417285799980164, -0.011729913763701916, -0.014600714668631554, -0.06782267242670059, -0.03971274569630623, -0.010219752788543701, 0.0009550646645948291, 0.01819669082760811, 0.020110558718442917, 0.013426975347101688, -0.010458986274898052, 0.033911336213350296, 0.016432344913482666, 0.017987363040447235, 0.007909654639661312, 0.01813688315451145, 9.81814882834442e-05, -0.01668653078377247, 0.0417761355638504, 0.011737389490008354, -0.0012793380301445723, -0.041417285799980164, -0.034270185977220535, -0.02253279648721218, 0.024865323677659035, -0.012754131108522415, -0.02476065792143345, -0.04111824184656143, -0.007180740591138601, 0.01408486720174551, -0.019901229068636894, 0.0009728202712722123, 0.013785826042294502, -0.004777192138135433, 0.011789721436798573, 0.04605243355035782, -0.0028969671111553907, 0.008746971376240253, -0.006489206571131945, 0.028005260974168777, -0.0069676730781793594, 0.011101925745606422, -0.0031586287077516317, -0.034001048654317856, -0.013098029419779778, 0.001826023799367249, -0.022996312007308006, -0.02275707945227623, 0.009001157246530056, 0.00564441317692399, -0.026629669591784477, -0.001940033514983952, -0.0022297301329672337, -0.004238917026668787, 0.012679371051490307, 0.05110623687505722, -0.028722962364554405, 0.04213498532772064, -0.002861456014215946, -0.004784668330103159, -0.006309781223535538, 0.015505315735936165, 0.030188266187906265, -0.03800820931792259, 0.0038950189482420683, 0.008156363852322102, -0.04413856565952301, -0.031040536239743233, 0.04204527288675308, -0.04889332875609398, 0.032057277858257294, 0.006777034141123295, -0.014234388247132301, 0.031130248680710793, -0.0041417283937335014, -0.07541833072900772, -0.002796040615066886, 0.010728123597800732, 0.01898915320634842, 0.008111507631838322, 0.007038695737719536, 0.015864165499806404, 0.005536010954529047, -0.020065702497959137, -0.012073811143636703, -0.016865955665707588, -0.01027208473533392, 0.0022297301329672337, -0.014675474725663662, -0.004900547210127115, 0.002734363079071045, -0.03549625724554062, 0.002239075256511569, -0.021007684990763664, 0.006530324462801218, 0.03773907199501991, 0.004623933229595423, -0.0030782611574977636, 0.018121931701898575, -0.010010423138737679, -0.008903969079256058, 0.013434451073408127, -0.027167944237589836, 0.04874380677938461, -0.03513740748167038, 0.01934800297021866, 0.0348682701587677, -0.012731702998280525, 0.02900705114006996, -0.03800820931792259, 0.015953877940773964, 0.024476569145917892, 0.014907232485711575, -0.00019788154168054461, 0.014391385018825531, 0.020035797730088234, 0.029081812128424644, 0.03421038016676903, -0.021007684990763664, 0.07206906378269196, -0.00564441317692399, -0.028050117194652557, 0.01867515780031681, -0.005042592063546181, -0.006739654112607241, 0.006870484445244074, 0.010571126826107502, -0.009352531284093857, 0.012313044629991055, -0.008358217775821686, 0.018705062568187714, -0.014211960136890411, 0.0007761068409308791, -0.038755811750888824, -0.003913708962500095, -0.013569020666182041, -0.0031044273637235165, -0.02494008280336857, -0.0009517938597127795, 0.007685373537242413, 0.020574074238538742, 0.03154890611767769, 
0.010481414385139942, 0.019587235525250435, -0.03012845851480961, -0.008881540969014168, -0.019811516627669334, -0.0014447455760091543, -0.006119142286479473, 0.008649783208966255, 0.02142634242773056, 0.0002199592418037355, -0.02825944684445858, 0.006560228765010834, 0.03217689320445061, -0.009404863230884075, -0.01332978717982769, 0.015669789165258408, -0.028184687718749046, -0.00528930127620697, 0.012402757070958614, 0.02674928680062294, 0.012268188409507275, 0.012582182884216309, 0.021680528298020363, 0.024013053625822067, 0.015430555678904057, -0.030891014263033867, 0.00557712884619832, 0.02480551414191723, 0.008844160474836826, -0.049491412937641144, -0.04530482739210129, 0.015849214047193527, 0.02861829660832882, -0.009165630675852299, 0.024342000484466553, -0.024611137807369232, -0.024371903389692307, -0.02404295839369297, -0.026465196162462234, -0.05762534961104393, -0.02142634242773056, 0.004433294292539358, 0.04650099575519562, -0.0037137249018996954, -0.005853742826730013, 0.025224173441529274, -0.008746971376240253, 0.004657575394958258, 0.04856438189744949, 0.0077302297577261925, 0.006220068782567978, 0.0022222541738301516, 0.0070573859848082066, 0.023878484964370728, -0.0038426867686212063, 0.014802567660808563, 0.020514264702796936, 0.058941133320331573, 0.04700936749577522, 0.004481888376176357, 0.009883330203592777, -0.005349109880626202, -0.0014512870693579316, 0.033253446221351624, 0.015774453058838844, -0.009883330203592777, 0.0463215708732605, 0.011984098702669144, -0.011520584113895893, 0.03033778816461563, 0.010234704241156578, 0.030233122408390045, -0.008918920531868935, -0.01348678395152092, -0.029395805671811104, -0.018346212804317474, 0.009531956166028976, 0.026853950694203377, 0.009389911778271198, 0.03779888153076172, 0.006201378535479307, 0.008537642657756805, -0.0050201634876430035, 0.004732335917651653, 0.017763081938028336, 0.004485626704990864, -0.021172156557440758, -0.018884487450122833, 0.0024839157704263926, -0.04057996720075607, -0.008246077224612236, -0.04240412265062332, 0.08415035158395767, -0.005775243975222111, 0.03412066772580147, 0.00712093198671937, -0.018884487450122833, -0.03878571838140488, 0.0027773503679782152, -0.015624932013452053, -0.03244603052735329, -0.006193902809172869, -0.010144991800189018, -0.029261237010359764, 0.029261237010359764, -0.005707959644496441, -0.026689477264881134, -0.02541854977607727, -0.00696393521502614, 0.004964093212038279, -0.005042592063546181, -0.05499378219246864, -0.027915548533201218, -0.022129090502858162, 0.04102852940559387, -0.03181804344058037, -0.03199746832251549, 0.0362139567732811, 0.03253574296832085, 0.0263904370367527, 0.01046646200120449, -0.026240915060043335, -0.030711589381098747, 0.0108925960958004, 0.003541775979101658, -0.004212751053273678, 0.0325656495988369, 0.018286405131220818, 0.0339711457490921, -0.03226660564541817, -0.024057909846305847, -0.011415919288992882, 0.07380350679159164, -0.01797240972518921, 0.023145832121372223, 0.025298934429883957, -0.0211123488843441, 0.006679845508188009, 0.025478359311819077, 0.020559120923280716, 0.018121931701898575, -0.014339053072035313, 0.001091502490453422, -0.005614509340375662, -0.0011718699242919683, 0.008664735592901707, 0.020065702497959137, 0.02169547975063324, -0.038486674427986145, -0.06357628107070923, 0.031788140535354614, 0.02244308404624462, -0.027362322434782982, -0.009270294569432735, -0.009075917303562164, 0.04458712786436081, 0.008829208090901375, 0.04330124706029892, 0.011303778737783432, -0.026554908603429794, 
-0.0036053224466741085, -0.019407810643315315, -0.014032535254955292, -0.0040034218691289425, 0.02604653872549534, -0.019407810643315315, 0.043989043682813644, 0.04321153461933136, -0.04569358378648758, 0.03003874607384205, 0.018839631229639053, 0.011550487950444221, 0.003190401941537857, -0.026315676048398018, 0.03217689320445061, -0.020080653950572014, 0.007595661096274853, -0.0106683149933815, -0.016178159043192863, -0.015759501606225967, 0.05149499326944351, 0.028977148234844208, -0.021994521841406822, 0.0077302297577261925, 0.015161417424678802, -0.02014046348631382, 0.006724701728671789, 0.030277978628873825, 0.01637253724038601, 0.039234280586242676, 0.012963460758328438, -0.02142634242773056, 0.014937136322259903, 0.037200797349214554, -0.002427845261991024, 0.021082444116473198, 0.0009027323685586452, -0.028379064053297043, 0.005913550965487957, 0.02072359435260296, 0.029470566660165787, -0.0348682701587677, -0.01078045554459095, -0.05451531335711479, -0.019572284072637558, 0.019527427852153778, 0.01836116425693035, 0.009830998256802559, -0.013105505146086216, -0.01614825613796711, 0.04703927040100098, 0.035286929458379745, -0.01033936906605959, -0.058133721351623535, 0.032715167850255966, 0.02350468374788761, 0.004384699743241072, -0.012896176427602768, -0.059599023312330246, -0.018794775009155273, -0.04360029101371765, 0.01761355996131897, -0.008313361555337906, -0.011034641414880753, -0.005079972092062235, 0.0031698427628725767, -0.015849214047193527, 0.01991618238389492, -0.04321153461933136, 0.0070798140950500965, 0.004377224016934633, 0.018600398674607277, -0.010324416682124138, -0.0321170873939991, -0.015819309279322624, 0.00016085176321212202, -0.015445507131516933, -0.016402440145611763, -0.017778033390641212, 0.03459913283586502, 0.018555542454123497, -0.011789721436798573, 0.011445824056863785, 0.004851952660828829, -0.025134461000561714, -0.024386856704950333, -0.08576518297195435, -0.018032219260931015, -0.0022633722983300686, -0.015370747074484825, -0.0019848898518830538, -0.008193744346499443, -0.0009616062161512673, -0.04413856565952301, 0.042703162878751755, -0.021979570388793945, 0.000837316969409585, -0.01659681834280491, -0.0032109608873724937, -0.013479308225214481, 0.0013989547733217478, 0.009412339888513088, 0.03681204095482826, 0.007700325455516577, -0.03415057063102722, 0.006597609259188175, 0.037858687341213226, 0.08062165975570679, 0.02896219491958618, 0.04919237270951271, -0.017404230311512947, 0.012088763527572155, 0.026584813371300697, -0.004429556429386139, 0.02027503214776516, -0.029485518112778664, 0.02458123303949833, -0.031339578330516815, -0.012791511602699757, -0.013359691016376019, -0.02861829660832882, -0.013583972118794918, 0.003674475708976388, 0.004773454274982214, -0.03782878443598747, 0.014047486707568169, -0.030367691069841385, -0.012058859691023827, 1.5696772607043386e-05, 0.04700936749577522, -0.013112981803715229, -0.010720647871494293, 0.001615760033018887, 0.012918604537844658, -0.004508054815232754, 0.014054963365197182, -0.0037230700254440308, -0.05484426021575928, -0.043361056596040726, -0.03130967170000076, -0.020020846277475357, -0.01587911881506443, 0.02289164811372757, 0.06064566969871521, -0.04751773551106453, -0.009173106402158737, -0.002711934968829155, 0.029246285557746887, 0.009046013467013836, 0.002061519306153059, -0.048773713409900665, -0.017060333862900734, 0.0017839710926637053, 0.02496998757123947, -0.012881224043667316, 0.03343287110328674, 0.048026107251644135, -0.04688974842429161, 0.026420339941978455, 
0.008335789665579796, -0.008298409171402454, 0.013516687788069248, -0.008814255706965923, -0.017045380547642708, -0.030861111357808113, 0.006078023929148912, 0.032984308898448944, 0.01417457964271307, 0.04703927040100098, 0.01720985397696495, 0.05541244149208069, -0.025837209075689316, 0.020738545805215836, 0.03920437768101692, -0.01637253724038601, -0.04099862650036812, -0.007210644893348217, 0.03962303325533867, -0.032894596457481384, -0.020304936915636063, 0.009210486896336079, -0.011303778737783432, 0.028468776494264603, -0.039593130350112915, -0.003268900327384472, -0.006541538517922163, 0.03983236476778984, 0.05586100369691849, -0.025269029662013054, 0.01297841314226389, 0.00035114045022055507, -0.035645779222249985, -0.00650042062625289, -0.00689665088430047, -0.01373349316418171, -0.03364219889044762, 0.0003296468348708004, 0.05017920956015587, -0.008335789665579796, 0.0038239965215325356, -0.011146781966090202, 0.02523912489414215, -0.012372853234410286, 0.006014477461576462, -0.008567546494305134, -0.024282190948724747, 0.00488933315500617, 0.005229493137449026, 0.03738022223114967, 0.004268821328878403, -0.026465196162462234, 0.010563650168478489, -0.005898599047213793, 0.01335221529006958, -0.03304411470890045, 0.014099819585680962, -0.02568768709897995, 0.005745340138673782, 0.06554995477199554, 0.012185951694846153, -0.002125065540894866, 0.00672096386551857, 0.010750551708042622, 0.010862692259252071, -0.00980109442025423, -0.01650710590183735, 0.007072337903082371, -0.025224173441529274, 0.012073811143636703, -0.0001840975892264396, 0.05678803101181984, 0.013337262906134129, -0.012656942941248417, -0.014817520044744015, -0.008418025448918343, 0.00495661748573184, -0.007064861711114645, -0.0012774690985679626, 0.021142253652215004, -0.008171316236257553, -0.01093745231628418, -0.0029810727573931217, 0.006679845508188009, 0.019153624773025513, 0.01774812862277031, 0.006863008718937635, 0.0069228168576955795, -0.03334315866231918, -0.013561544008553028, -0.0009326364961452782, -0.0029193952213972807, -0.0026969830505549908, 0.03154890611767769, 0.01955733262002468, -0.03334315866231918, -0.025702640414237976, -0.002024139044806361, -0.03172833099961281, -0.009666524827480316, -0.0005191178061068058, -0.02498493902385235, -0.0012129881652072072, 0.00495661748573184, 0.02462608925998211, -0.0215609110891819, -0.004698693752288818, 0.041327573359012604, -0.024162575602531433, -0.001357836532406509, -0.007599398959428072, 0.006821890361607075, 0.00352682382799685, -0.0050463299266994, 0.009098345413804054, 0.02000589482486248, 0.022562701255083084, -0.022293563932180405, -0.04249383509159088, 0.03567568212747574, 0.0008929200121201575, -0.010615983046591282, 0.014010107144713402, 0.0021194585133343935, 0.01783784106373787, -0.025523215532302856, -0.02537369355559349, 0.008732019923627377, 0.02413267083466053, -0.04790649190545082, 0.008761923760175705, -0.002463356591761112, 0.012828892096877098, 0.018734967336058617, 0.0019961039070039988, -0.021142253652215004, -0.011655152775347233, -0.0065452768467366695, 0.002837158739566803, 0.00261100847274065, 0.012514897622168064, 0.004504316486418247, -0.013785826042294502, 0.01428672019392252, 0.010571126826107502, -0.024566281586885452, 0.02115720510482788, 0.007775085978209972, -0.012425185181200504, -0.006040643900632858, -0.011019689030945301, -0.031698428094387054, 0.0017316387966275215, 0.0005761226639151573, -0.013165313750505447, 0.0025325100868940353, 0.019422762095928192, 0.02790059708058834, -0.0026091395411640406, 
0.0009345055441372097, -0.014697902835905552, -0.0005672448314726353, 0.02564283087849617, 0.008492786437273026, 0.014810043387115002, 0.03642328828573227, -0.01435400452464819, -0.06806190311908722, -0.00540144182741642, 0.002231599297374487, 0.03127976879477501, -0.021979570388793945, 0.026270819827914238, 0.0012849450577050447, 0.0022484203800559044, 0.03549625724554062, 0.024641042575240135, -0.02334021031856537, 0.024386856704950333, 0.034270185977220535, 0.015340843237936497, 0.00259044929407537, -0.011894386261701584, 0.035286929458379745, -0.011460775509476662, -0.017090236768126488, 0.04440770298242569, -0.015520268119871616, -0.026734333485364914, 0.011647677049040794, -0.009255343116819859, -0.02967989630997181, 0.004807096440345049, 0.0008251683902926743, -0.021531008183956146, -0.03648309409618378, 0.021755289286375046, 0.0026614717207849026, -0.028977148234844208, -0.027481937780976295, -0.02147119864821434, 0.015340843237936497, -0.010563650168478489, 0.0012849450577050447, 0.004668789450079203, -0.008672211319208145, -0.028633249923586845, -0.010907548479735851, -0.01880972646176815, 0.01673138700425625, 0.005435083992779255, 0.026360532268881798, -0.022816887125372887, 0.013613876886665821, 0.02093292400240898, 0.003450194373726845, 0.000996182905510068, -0.04093881696462631, -0.01381572987884283, 0.006687321700155735, 0.019362954422831535, -0.0043473197147250175, -0.01797240972518921, -0.007670421618968248, 0.001007396960631013, -0.037021372467279434, -0.027362322434782982, 0.01939285919070244, -0.028842579573392868, 0.02079835534095764, 0.020349793136119843, 0.009995470754802227, 0.016791194677352905, -0.0009690822334960103, -0.028065070509910583, 0.003029666841030121, -0.029889224097132683, 0.02334021031856537, -0.02320564165711403, 0.010264609009027481, 0.03561587631702423, 0.012664418667554855, -0.02431209571659565, 0.002758660353720188, -0.012537325732409954, -0.0017026690766215324, 0.005255659110844135, 0.017224805429577827, -0.00337169598788023, -0.007621827069669962, 0.021217012777924538, 0.016118351370096207, 0.017179949209094048, 0.027003472670912743, -0.03450942039489746, 0.012395281344652176, 0.004437032155692577, 0.013404547236859798, 0.008044223301112652, 0.023041168227791786, -0.012791511602699757, 0.040789298713207245, 0.02009560726583004, -0.0029380854684859514, 0.013120457530021667, 0.016118351370096207, 0.015490363352000713, 0.03060692548751831, -0.023235546424984932, 0.009247866459190845, 0.008724543265998363, -0.053618188947439194, 0.010458986274898052, 7.102241943357512e-05, 0.004362271632999182, -0.012200904078781605, -0.011782245710492134, 0.0047136456705629826, -0.0018933082465082407, -0.007244287058711052, 0.01619311235845089, -0.026495100930333138, 0.04733831062912941, 0.0190340094268322, 0.016028638929128647, -0.011759817600250244, -0.026764238253235817, 0.0034202903043478727, -0.01390544231981039, -0.02812487818300724, -0.009614192880690098, -0.013785826042294502, 0.018166787922382355, 0.001254106406122446, -0.03265536203980446, -0.019751708954572678, -0.006264925003051758, 0.001242892351001501, -0.012320521287620068, -0.016955668106675148, -0.010100135579705238, -0.0008499327814206481, -0.013471831567585468, 0.05062777176499367, -0.01783784106373787, -0.01073559932410717, -0.006078023929148912, 0.0016241705743595958, -0.029664942994713783, 0.006276139058172703, -0.030143409967422485, -2.5655097488197498e-05, -0.01831630803644657, -0.0012718620710074902, -0.002758660353720188, -0.0035100027453154325, -0.014204484410583973, 0.01424186397343874, 
0.0008424567640759051, 0.008021795190870762, -0.018555542454123497, 0.008664735592901707, 0.02160576730966568, 0.006208854727447033, -0.022996312007308006, -0.00040557541069574654, -0.013030745089054108, 0.01801726594567299, -0.002431583357974887, 0.02767631597816944, -0.005494892597198486, 0.01578940451145172, -0.03265536203980446, 0.03361229598522186, -0.04733831062912941, -0.007427449803799391, 0.01955733262002468, 0.04967083781957626, -0.03074149414896965, 0.0204096008092165, 0.015550171956419945, 0.0012961591128259897, -0.017015475779771805, -0.011034641414880753, 0.025313885882496834, -0.01632768101990223, -0.020289983600378036, 0.0204096008092165, -0.00482578668743372, -0.008739495649933815, 0.011557964608073235, 0.01033936906605959, -0.013890489935874939, 0.006261187139898539, -0.017688320949673653, 0.02435695193707943, -0.0027754814364016056, -0.012447613291442394, -0.024700850248336792, -0.006646203342825174, -0.04377971589565277, -0.03286468982696533, -0.02244308404624462, 0.038127824664115906, -0.003562335157766938, 0.0036277505569159985, 0.029530374333262444, 0.03726060315966606, -0.04019121453166008, 0.03735031560063362, -0.00265586469322443, -0.007692849729210138, -0.012208379805088043, -0.008350741118192673, -0.053079914301633835, 0.005087448284029961, -0.011909338645637035, 0.03110034391283989, 0.010660839267075062, 0.00995809119194746, -0.022218802943825722, -0.0027175419963896275, -0.01430914830416441, -0.029934080317616463, 0.003300673561170697, 0.007782562170177698, 0.009322627447545528, 0.009434767998754978, -0.010197324678301811, 0.008963776752352715, 0.00332683976739645, -0.015565124340355396, -0.04635147377848625, -0.02200947515666485, -0.011610296554863453, 0.025971777737140656, 0.010787932202219963, 0.00528930127620697, 0.014159628190100193, -0.00010016730811912566, 0.029291141778230667, -0.009726333431899548, 0.0021231966093182564, -0.00041819122270680964, -0.008380645886063576, 0.011602820828557014, -0.0041193002834916115, -0.008859112858772278, 0.048594288527965546, -0.01346435584127903, 0.022861743345856667, -0.026016633957624435, 0.01113182958215475, -0.001022348995320499, 0.011737389490008354, 0.008836684748530388, 0.001386806252412498, -0.0008429239969700575, 0.010541222058236599, -0.003257686272263527, -0.018077075481414795, -0.001312980311922729, 0.04688974842429161, -0.0005583670572377741, -0.008589974604547024, 0.0208581630140543, 0.015624932013452053, -0.004582815337926149, -0.0004310406802687794, -0.00131671829149127, 0.0066611552610993385, 0.03145919367671013, 0.027646411210298538, -0.03603453189134598, 0.009023585356771946, 0.012477518059313297, 0.007236810866743326, -0.014324100688099861, 0.006892913021147251, -0.007685373537242413, 0.0017391147557646036, 0.008298409171402454, -0.01637253724038601, -0.025313885882496834, -0.0024913917295634747, 0.031219961121678352, 0.004687479697167873, -0.007049909792840481, 0.01671643555164337, -0.0195124763995409, 0.006937769241631031, 0.0011709354585036635, -0.003857638919726014, 0.0026838998310267925, -0.0007733033271506429, 0.013112981803715229, -0.003625881392508745, 0.013613876886665821, 0.012649467214941978, -0.01827145181596279, -0.011655152775347233, 0.002614746568724513, 0.021979570388793945, 0.015011896379292011, 0.0013251288328319788, -0.01761355996131897, -0.004971569404006004, -0.004410866182297468, 0.032984308898448944, 0.017135092988610268, -0.016880907118320465, -0.006392017938196659, 0.018286405131220818, -0.03660271316766739, 0.015505315735936165, -0.003040880896151066, 0.008978729136288166, 
0.023400017991662025, 0.00010110181756317616, -0.01955733262002468, 0.013165313750505447, 0.023100975900888443, 0.01317279040813446, -0.02577740140259266, -0.023145832121372223, 0.018256500363349915, -0.0002562881272751838, -0.01840602047741413, -0.009397387504577637, -0.002003579866141081, -0.0103169409558177, -0.014727807603776455, 0.02564283087849617, 0.026270819827914238, 0.012455089949071407, -0.010615983046591282, -0.005625723395496607, -0.0043921759352087975, -0.015490363352000713, -0.002220385242253542, 0.0029904176481068134, 0.01894429698586464, 0.021351581439375877, -0.0071844784542918205, 0.005147256422787905, -0.04135747626423836, -0.006182688754051924, 0.003246472217142582, 0.022607557475566864, 0.017778033390641212, 0.004220226779580116, 0.008193744346499443, -0.007042433600872755, -0.004044539760798216, 0.027930501848459244, 0.043450769037008286, 0.011557964608073235, -0.00480335857719183, -0.02812487818300724, -0.030861111357808113, -0.010331893339753151, 0.05388732627034187, -0.03110034391283989, -0.01734442263841629, -0.018525637686252594, -0.024162575602531433, -0.0035436449106782675, -0.03594481945037842, 0.024551330134272575, 0.0013914786977693439, -0.001958723645657301, -0.01374844554811716, 0.04617204889655113, -0.0143315764144063, -0.019422762095928192, 0.008687163703143597, 0.011752341873943806, 0.017987363040447235, 0.016791194677352905, -0.0009877723641693592, 0.013681161217391491, -0.013202694244682789, -0.002128803636878729, 0.002762398449704051, -0.014428765513002872, 0.04706917330622673, 0.08678191900253296, 0.009113297797739506, -0.026868902146816254, -0.01087016798555851, 0.008612402714788914, -0.0013008316745981574, 0.005812624469399452, -0.00969642959535122, 0.02501484379172325, -0.011146781966090202, -0.00250073685310781, 0.005035115871578455, 0.010040326975286007, -0.0023026217240840197, -0.024237334728240967, 0.011946719139814377, -0.025433503091335297, 0.02523912489414215, -0.007408760022372007, -0.008604926988482475, -0.017404230311512947, -0.021276822313666344, 0.016836050897836685, 0.0001718322018859908, 0.03474865481257439, -0.022906599566340446, -0.013845633715391159, 0.018615350127220154, 0.002936216304078698, -0.01752384752035141, 0.020574074238538742, 0.0023661679588258266, -0.02160576730966568, -0.008418025448918343, -0.003648309502750635, 0.008126460015773773, -0.02027503214776516, 0.003470753552392125, 0.00652284873649478, 0.03495798259973526, 0.011333683505654335, -0.007595661096274853, 0.01682109944522381, 0.011386015452444553, 0.019796565175056458, -0.0048930710181593895, -0.004885594826191664, -0.006160260643810034, 0.011206590570509434, -0.009016109630465508, -0.014832471497356892, 0.00341655220836401, 0.0027530533261597157, 0.016118351370096207, 0.01038422528654337, -0.0030221908818930387, 0.002508212812244892, -0.007599398959428072, -0.013853110373020172, -0.005210802890360355, -0.007049909792840481, -0.02338506653904915, -0.022039378061890602, -0.02151605486869812, -0.05690765008330345, -0.0014989469200372696, 0.013299882411956787, 0.010593554936349392, -0.007700325455516577, -0.035197217017412186, 0.004855690523982048, -0.024237334728240967, 0.010369272902607918, -0.04072948917746544, -0.011333683505654335, -0.009031061083078384, -0.007562018930912018, 0.027167944237589836, 0.016447298228740692, -0.004537958651781082, -0.0008784352103248239, -0.003719331929460168, -0.00564441317692399, -0.03764935955405235, 0.008545118384063244, 0.00024320506781805307, 0.01337464340031147, 0.016447298228740692, 0.019318098202347755, 
-0.016925763338804245, 0.03633357584476471, -0.01818173937499523, 0.02444666437804699, -0.002870800904929638, 0.011954194866120815, -0.000732185086235404, -0.004037064034491777, 0.0013027007225900888, 0.01916857808828354, -0.006291091442108154, 0.008664735592901707, -0.0024727017153054476, 0.01770327240228653, 0.002169921761378646, 0.01831630803644657, -0.014279244467616081, -0.025807304307818413, -0.025358742102980614, -0.0013260632986202836, -0.016791194677352905, 0.01863030157983303, -0.014511002227663994, 0.019377905875444412, 0.019661996513605118, -0.05257154256105423, 0.0026726857759058475, -0.005584605038166046, 0.016581866890192032, -0.014563334174454212, 0.025044748559594154, -0.0208880677819252, -0.0057602920569479465, -0.018884487450122833, 0.01020480040460825, 0.02325049787759781, 0.0025026057846844196, -0.006335947662591934, -0.005950930994004011, 0.022323468700051308, -0.011602820828557014, -0.0031418076250702143, 0.02440180815756321, -0.012514897622168064, 0.008006843738257885, 0.02351963520050049, 0.0002076938544632867, -0.028543537482619286, 0.012230808846652508, -0.002072733361274004, 0.001530720037408173, 0.0006560228648595512, -0.040609873831272125, -0.02896219491958618, -0.0051547326147556305, -0.006960197351872921, -0.008380645886063576, -0.0005079037509858608, 0.04769716411828995, -0.015565124340355396, 0.012342949397861958, -0.012230808846652508, 0.00650042062625289, 0.00707607576623559, 0.010346844792366028, -0.021904809400439262, -0.02293650433421135, -0.0011167341144755483, 0.0023231806699186563, 0.014697902835905552, 0.019243337213993073, 0.009360007010400295, 0.01300084125250578, -0.004493102431297302, -0.013015792705118656, -0.01831630803644657, 0.018794775009155273, -0.007539590820670128, 0.025538166984915733, 0.00789470225572586, 0.02191976085305214, 0.014361481182277203, 0.003984731622040272, 0.003814651630818844, -0.016746338456869125, 0.033821623772382736, 0.019108768552541733, -0.014967040158808231, 0.008133935742080212, 0.037858687341213226, 0.009973042644560337, -0.009830998256802559, -0.0035885011311620474, -0.009031061083078384, -0.012208379805088043, -0.024700850248336792, -0.0029380854684859514, -0.0001191494520753622, 0.009733809158205986, -0.04494597762823105, -0.0137559212744236, -0.02363925240933895, 0.017269661650061607, -0.004874380771070719, -0.00020372220024000853, 0.016387488692998886, 0.030457403510808945, 0.008432977832853794, 0.011984098702669144, -0.03588501363992691, -0.026136251166462898, 0.038038112223148346, 0.0005980835412628949, -0.011064545251429081, -0.005722912028431892, 0.0009228241979144514, -0.0106683149933815, -0.006948983296751976, -0.009367483668029308, -0.0014914708444848657, -0.01788269728422165, 0.0020297460723668337, -0.0065677049569785595, -0.005483678542077541, -0.02857344038784504, 0.0017699535237625241, 0.029156573116779327, 0.017464039847254753, -0.024611137807369232, 0.011632724665105343, -0.011498156003654003, -0.002526903059333563, 0.022024426609277725, 0.0034913127310574055, 0.0012129881652072072, -0.012582182884216309, -0.010548698715865612, -0.0006359310355037451, 0.01694071665406227, 0.006997577380388975, 0.02812487818300724, -0.00357168004848063, 0.0005518255056813359, -0.004646361339837313, -0.028722962364554405, -0.021575864404439926, -0.03223670274019241, -0.013337262906134129, 0.01824154704809189, -0.01761355996131897, 0.0032053538598120213, 0.009419815614819527, 0.02665957435965538, 0.014578286558389664, 0.01680614799261093, -0.01978161372244358, 0.06465283036231995, 0.018869535997509956, 
0.0030221908818930387, -0.002906312234699726, -0.003655785694718361, -0.0064219217747449875, 0.003450194373726845, -0.024551330134272575, -0.007304095197468996, 0.0002259167085867375, -0.024297144263982773, -0.0008480637916363776, 0.006709749810397625, 0.019004104658961296, 0.021396439522504807, 0.01039917767047882, 0.015624932013452053, 0.0034464565105736256, 0.005057543981820345, -0.008447930216789246, -0.0017465908313170075, -0.0029642514418810606, -0.009531956166028976, 0.011498156003654003, 0.0008349806885235012, -0.008627355098724365, -0.02311592921614647, -0.004762240219861269, 0.004919236991554499, -0.022966407239437103, 0.0066499412059783936, -0.00012090165546396747, -0.006182688754051924, 0.0010765503393486142, -0.005636937450617552, 0.0222636591643095, -0.01610339991748333, 0.011229018680751324, -0.009397387504577637, 0.02923133224248886, 0.007001315243542194, -0.003508133813738823, 0.016746338456869125, 0.004111824557185173, 0.011229018680751324, -0.005326681304723024, 0.006997577380388975, 0.011677580885589123, -0.029739703983068466, -0.007576970849186182, 0.0030035008676350117, -0.0138830142095685, 0.035017792135477066, 0.006549014709889889, -0.014959564432501793, 0.027795933187007904, 0.02656986191868782, -0.010952404700219631, 0.0051435185596346855, 0.003072654129937291, 0.02692871168255806, -0.0054388223215937614, 2.712986315600574e-05, -0.0334627740085125, 0.013957774266600609, -0.003984731622040272, 0.009950614534318447, -0.028334207832813263, 0.005513582844287157, -0.02941075712442398, -0.005464988294988871, -0.00011599549907259643, -0.0067060114815831184, -0.006231282837688923, 0.006993839517235756, -0.008507737889885902, -0.007984415628015995, -0.006489206571131945, -0.00015419341798406094, -0.0017895781202241778, 0.02550826221704483, -0.0018942427122965455, -0.009591764770448208, 0.03247593715786934, -0.008612402714788914, 0.02643529325723648, 0.034270185977220535, 0.009502052329480648, -0.01060103066265583, -0.01381572987884283, -0.022129090502858162, 0.020260080695152283, -0.0023325257934629917, -0.01359892450273037, -0.01364378072321415, 0.008963776752352715, -0.0143315764144063, -0.011214066296815872, 0.006007001735270023, 0.013367166742682457, 0.0038950189482420683, 0.023803723976016045, -0.008769399486482143, -0.009442243725061417, 0.005506106652319431, -0.009636620990931988, -0.017718225717544556, -0.014974516816437244, -0.04461703076958656, -0.012552278116345406, 0.0072181206196546555, 0.000553227262571454, 0.009509528055787086, -0.007610613014549017, -0.006889174692332745, -0.010197324678301811, 0.01797240972518921, -0.006085500121116638, 0.008186268620193005, 0.015505315735936165, -0.0019269504118710756, 0.016910811886191368, -0.008051699958741665, -0.015550171956419945, -0.024910179898142815, 0.002579235238954425, 0.031877852976322174, -0.023714011535048485, 0.01430914830416441, -0.02816973440349102, 0.03432999551296234, -0.016387488692998886, -0.01752384752035141, 0.015183845534920692, 0.0032894595060497522, -0.008881540969014168, 0.009576812386512756, 0.007804990280419588, 0.022727174684405327, 0.006780772004276514, -0.0021848739124834538, 0.02426723949611187, 0.026360532268881798, 0.027885645627975464, 0.013068125583231449, 0.025478359311819077, -0.0190340094268322, 0.022906599566340446, 0.002083947416394949, -0.022024426609277725, -0.008657258935272694, -0.012268188409507275, -0.008418025448918343, 0.017269661650061607, 0.01623796857893467, 0.0013606400461867452, -0.017404230311512947, -0.006137832533568144, 0.01373349316418171, -0.026285771280527115, 
-0.00047005628584884107, -0.029171524569392204, 1.0739512617874425e-05, -0.02090301923453808, 0.006029429845511913, -0.0015073574613779783, -0.0052930391393601894, 0.015938926488161087, 0.014839948154985905, 0.019362954422831535, 0.030771398916840553, -0.005681793671101332, -0.02404295839369297, 0.015161417424678802, -0.011789721436798573, -0.012522374279797077, -0.006119142286479473, 0.004190322943031788, 0.00022638397058472037, 0.006406969856470823, 0.018391069024801254, 0.024566281586885452, 0.020484361797571182, 0.008806779980659485, 0.015669789165258408, -0.021800145506858826, 0.010705695487558842, -0.00694150710478425, -0.03334315866231918, -0.015580075792968273, 0.018959248438477516, 0.008874064311385155, 0.027033375576138496, 0.005935979075729847, 0.020738545805215836, 0.00016085176321212202, -0.007760134059935808, 0.0005167815834283829, 0.019273241981863976, -0.028977148234844208, -0.017329471185803413, -0.0335225835442543, -0.01761355996131897, 0.006504158489406109, -0.010900072753429413, -0.02381867729127407, -0.0037679262459278107, 0.01452595368027687, -0.000615839147940278, 0.016312727704644203, -0.009920710697770119, -0.011752341873943806, -0.03654290363192558, -0.01027208473533392, 0.008462881669402122, -0.009143202565610409, -0.008231124840676785, -0.04694955796003342, -0.019946085289120674, 0.01779298484325409, -0.006141570396721363, 0.015714645385742188, -0.021815096959471703, -0.0002380652877036482, 0.023355161771178246, -0.001420448417775333, 0.011939242482185364, -0.00014157759142108262, 0.011094450019299984, 0.005790196359157562, 0.004186584614217281, -0.024147622287273407, 0.009928186424076557, 0.0071134562604129314, 0.008104031905531883, -0.011019689030945301, 0.015475411899387836, 0.00022077692847233266, 0.0010410391259938478, 0.0024801776744425297, -0.01867515780031681, 0.012470041401684284, -0.008365693502128124, 0.015041801147162914, -0.004952879156917334, 0.0032389962580055, -0.004706169944256544, -0.03238622471690178, 0.028872482478618622, 0.02945561520755291, -0.001125144655816257, -0.015191322192549706, -0.02595682628452778, -0.014877327717840672, 0.005218279082328081, -0.0013036351883783937, 0.009898282587528229, 0.01036179717630148, 0.008186268620193005, -0.017449086531996727, 0.00493418937548995, 0.006791986059397459, -0.008993681520223618, -0.0003999683540314436, 0.005472464486956596, 0.015266082249581814, -0.03630366921424866, -0.006003263406455517, -0.008492786437273026, 0.01930314674973488, -0.01810697838664055, 0.022233756259083748, -0.007939559407532215, 0.009427291341125965, -0.0028091236017644405, -0.007917131297290325, -0.008148888126015663, -0.035017792135477066, -0.009195534512400627, 0.00047799956519156694, 0.016447298228740692, 0.01720985397696495, 0.008261028677225113, -0.012096239253878593, 0.008051699958741665, -0.00967400148510933, 0.027646411210298538, 0.0031006892677396536, -0.004564125090837479, -0.0144586693495512, -0.00047005628584884107, -0.00010186110012000427, 0.010638411156833172, -0.0006877960986457765, -0.001091502490453422, -0.017718225717544556, 0.005565914791077375, 0.01916857808828354, 0.010877644643187523, 0.012073811143636703, -0.013471831567585468, 0.008238600566983223, 0.021889857947826385, -0.006552752573043108, 0.030098553746938705, -0.005558439064770937, -0.0019718066323548555, 0.0055322726257145405, 0.007397545967251062, 0.00980109442025423, 0.002424107398837805, 0.006638727150857449, -0.011161734350025654, 0.02187490463256836, -0.004612719174474478, -0.02395324595272541, -0.009659049101173878, 0.018839631229639053, 
-0.008418025448918343, 0.01824154704809189, -0.010944928973913193, -0.009479624219238758, -0.009232915006577969, -0.018570493906736374, -0.010391701944172382, -0.005696745589375496, -0.009808570146560669, -0.029485518112778664, 0.017583655193448067, 0.009173106402158737, -0.0003450661606620997, -0.014451193623244762, 0.008036747574806213, -0.0019344264874234796, 0.012552278116345406, 0.0011578523553907871, 0.023056119680404663, -0.013344738632440567, -0.004496840760111809, -0.0030334049370139837, -0.007790037896484137, -0.01042908150702715, -0.0007798448787070811, -9.25744534470141e-05, 0.006220068782567978, -0.006040643900632858, 0.01038422528654337, 0.0008863785187713802, 0.012470041401684284, -0.01082531176507473, -0.014839948154985905, 0.01087016798555851, 0.027885645627975464, 0.004227702971547842, 0.006384541746228933, -0.01037674956023693, -0.018600398674607277, 0.009165630675852299, 0.005528534762561321, -0.024416759610176086, 0.02462608925998211, -0.015430555678904057, 0.013120457530021667, 0.0032969354651868343, -0.008320837281644344, 0.004751026164740324, -0.015669789165258408, 0.017493942752480507, 0.02072359435260296, -0.0006835907697677612, 0.011483203619718552, 0.003666999749839306, 0.0029418233316391706, -0.007132146041840315, 0.01000294741243124, 0.009083393961191177, 0.0014839947689324617, -0.0011793459998443723, 0.003999683540314436, 0.011842054314911366, -0.012350425124168396, 0.021276822313666344, 0.004523006733506918, -0.021261868998408318, -0.009830998256802559, 0.006489206571131945, -0.007027481682598591, -0.0011737389722838998, -0.006990101188421249, -0.0038950189482420683, -0.019811516627669334, 0.00043524595093913376, -0.02054416947066784, -0.010698219761252403, 0.024611137807369232, -0.020424552261829376, 0.0017456563655287027, 0.016955668106675148, 0.008761923760175705, 0.005932241212576628, 0.008784351870417595, -0.005207065027207136, -0.018525637686252594, -0.01300084125250578, 0.014907232485711575, -0.007917131297290325, 0.024371903389692307, -0.005906074773520231, -0.013808254152536392, -0.00727419089525938, 0.008657258935272694, 0.001125144655816257, 0.010817836038768291, -0.016432344913482666, 0.005928502883762121, 0.0003359547408763319, 0.021531008183956146, 0.0019904968794435263, 0.0006905996124260128, 0.011273874901235104, -0.02187490463256836, 0.0036576546262949705, 0.004227702971547842, -0.0023026217240840197, -0.003287590341642499, 0.020245127379894257, 0.00537901371717453, 0.013965250924229622, 0.015669789165258408, -0.0042762975208461285, 0.0034987886901944876, 0.008657258935272694, 0.02564283087849617, -0.02568768709897995, 0.02528398111462593, -0.009875854477286339, -0.025747496634721756, -0.020125510171055794, 0.00016575791232753545, -0.003640833543613553, -0.004478150513023138, -0.019796565175056458, -0.00683310441672802, -0.004474412649869919, -0.002151231747120619, 9.794785728445277e-05, -0.0073751178570091724, 0.016611769795417786, 0.01355406828224659, 0.007662945427000523, 0.0013363428879529238, 0.019901229068636894, -0.01894429698586464, -0.005603295285254717, -0.024506473913788795, -0.01614825613796711, 0.005050067789852619, -0.00965157337486744, -0.005307991523295641, -0.001958723645657301, -0.006668631453067064, 0.040819201618433, 0.003156759776175022, 0.02200947515666485, 0.000938710814807564, 0.0026054014451801777, -0.009457196108996868, 0.0036277505569159985, 0.012171000242233276, -0.018480781465768814, -0.01824154704809189, -0.005435083992779255, 0.006691059563308954, 0.01383815798908472, -0.007442402187734842, 
-0.00519958883523941, 0.008036747574806213, -0.006889174692332745, -0.005094924010336399, -0.006496682297438383, 0.0024110241793096066, 0.007416235748678446, 0.0023998101241886616, -0.0020166628528386354, 0.008500262163579464, 0.02839401550590992, 0.019273241981863976, -0.027691267430782318, -0.0036819519009441137, -0.007976938970386982, 0.006691059563308954, -0.007513424381613731, -0.007405021693557501, 0.01366620883345604, 0.0018493864918127656, 0.012896176427602768, -0.010817836038768291, 0.006642465479671955, 0.007647993043065071, 0.01637253724038601, -0.004388438072055578, 0.011191638186573982, -0.0030464879237115383, 0.0030483570881187916, 0.004818310495465994, -0.011371063068509102, 0.003741760039702058, 0.0013419499155133963, -0.004526744596660137, -0.006358375772833824, 0.004052015952765942, 0.005352847743779421, 0.019766660407185555, -0.001700800028629601, 0.018794775009155273, 0.009748761542141438, 0.014944612048566341, -0.00789470225572586, -0.001645664218813181, 0.0068181524984538555, 0.004201536998152733, -0.016836050897836685, -0.002328787697479129, 0.0038127824664115906, 0.005169684533029795, 0.005696745589375496, 0.0023586919996887445, -0.017508896067738533, 0.01371106505393982, 0.001132620731368661, -0.006735915783792734, 0.0011643938487395644, -0.008231124840676785, -0.008791827596724033, 0.014787615276873112, 0.012305568903684616, 0.01286627259105444, -0.005932241212576628, 0.01854058913886547, -0.011670105159282684, 0.002685768995434046, -0.02688385546207428, 0.012260712683200836, 0.003709987038746476, 0.006859270390123129, -0.022203851491212845, -0.01569969207048416, -0.011976622976362705, -0.006762082222849131, -0.03119005635380745, 0.007464830297976732, 0.0002101469290209934, 0.029380854219198227, -0.008941348642110825, 0.008201220072805882, -0.01064588688313961, 0.007064861711114645, 0.00546872615814209, -0.0037062489427626133, 0.004209012724459171, -0.024252288043498993, 0.0016989310970529914, -0.002738101175054908, -0.01093745231628418, -0.008223649114370346, -0.0068779606372118, 0.0043697478249669075, -3.235725307604298e-05, 0.0025474620051681995, 0.014413813129067421, 0.013412022963166237, 0.00473607424646616, -0.015445507131516933, 0.03385152667760849, -0.0031418076250702143, 0.007618089206516743, 0.0016512712463736534, 0.014884804375469685, 0.005386489909142256, 0.01854058913886547, 0.015475411899387836, 0.002394203096628189, 0.005360323935747147, -0.010130040347576141, 0.01810697838664055, 0.02532883733510971, -0.0060668098740279675, 0.006646203342825174, -0.012230808846652508, -0.017015475779771805, 0.03074149414896965, -0.011602820828557014, 0.021127300336956978, 0.017269661650061607, -0.005921027157455683, -0.03145919367671013, 0.011610296554863453, -0.000367961562005803, 0.020035797730088234, -0.024431712925434113, -0.01863030157983303, -0.0006905996124260128, 0.010309465229511261, 0.001645664218813181, 0.011752341873943806, -0.011393491178750992, 0.008724543265998363, 0.014802567660808563, 0.004037064034491777, -0.022607557475566864, 0.03465894237160683, 0.02431209571659565, 0.01770327240228653, -0.017179949209094048, -0.021306725218892097, 0.01667157933115959, 0.02307107299566269, -0.019422762095928192, -0.0055322726257145405, 0.008694639429450035, -0.01060103066265583, 0.02230851538479328, 0.007782562170177698, -0.01765841618180275, 0.004594029393047094, -0.007864798419177532, -0.022024426609277725, -0.005906074773520231, -0.014600714668631554, -0.012544802390038967, -0.003009107895195484, 0.0019475094741210341, 0.007382593583315611, 
0.011595344170928001, -0.006930293049663305, -0.011528059840202332, -0.019901229068636894, 0.0065564909018576145, 0.009053489193320274, 0.0066536795347929, 0.01417457964271307, -0.003633357584476471, -0.008425502106547356, -0.007453616242855787, 0.009158154018223286, -0.02124691754579544, 0.007401283830404282, 0.0016494023147970438, -0.008530166931450367, 0.028782770037651062, 0.010952404700219631, 0.0023231806699186563, -0.01379330176860094, 0.007128408178687096, -0.009763713926076889, 0.00019718066323548555, -0.0098683787509799, -0.004739812109619379, 0.009479624219238758, 0.005472464486956596, -0.03319363668560982, 0.012245760299265385, 0.01827145181596279, 0.006197640672326088, 0.022472988814115524, 0.004575339145958424, 0.038845524191856384, 0.014824995771050453, -0.0009134791325777769, -0.005150994285941124, -0.008118984289467335, -0.008672211319208145, -0.00047192530473694205, 0.006141570396721363, 0.015609980560839176, -0.0017344423104077578, -0.024057909846305847, 0.0017830365104600787, -0.029425710439682007, 0.007947035133838654, -0.009950614534318447, 0.018555542454123497, 0.02426723949611187, -0.0036688686814159155, -0.016970619559288025, 0.03660271316766739, -0.02072359435260296, -0.03220679983496666, 0.03110034391283989, 0.011647677049040794, 0.0018933082465082407, -0.0021418866235762835, -0.011797198094427586, -0.00953943282365799, 0.013367166742682457, -0.018256500363349915, 0.018824679777026176, -0.010907548479735851, 0.017553752288222313, 0.00944972038269043, 0.007939559407532215, -0.001106454525142908, -0.028872482478618622, -0.011842054314911366, -0.0024801776744425297, 0.01113182958215475, -0.03833715617656708, -0.005898599047213793, -0.0009405798045918345, -0.006679845508188009, 0.014653046615421772, 0.022084234282374382, -0.0040146359242498875, 0.01637253724038601, 0.0362139567732811, 0.00018888693011831492, 0.002682030899450183, -0.008231124840676785, 0.008485309779644012, -0.0017615429824218154, 0.0021456247195601463, 0.008425502106547356, -0.0031268554739654064, -0.026779189705848694, -0.021994521841406822, 0.002556807128712535, 0.014675474725663662, 0.0100926598533988, 0.012843843549489975, -0.014944612048566341, -0.0016185635467991233, 0.006492944434285164, 0.018331261351704597, -0.02307107299566269, 0.0005593015812337399, -0.007505948189646006, 0.005550962872803211, 0.011161734350025654, 0.005162208341062069, -0.00993566308170557, -0.0019568544812500477, 0.013105505146086216, 0.007517162710428238, -0.006354637444019318, 0.031698428094387054, -0.007554542738944292, -0.001799857709556818, 0.002659602789208293, -0.0011073891073465347, -0.0033997311256825924, 0.012193428352475166, -0.009771189652383327, 0.011101925745606422, 0.008537642657756805, 0.023579442873597145, 0.0019344264874234796, 0.009345055557787418, 0.009726333431899548, 0.01578940451145172, -0.011849530041217804, 0.0020540431141853333, 0.026943663135170937, 0.013606400229036808, 0.014249340631067753, -8.947890455601737e-05, 0.026584813371300697, -0.011400967836380005, 0.004306201357394457, -0.007677897345274687, 0.0042875115759670734, 0.012791511602699757, 0.017150046303868294, 0.020334839820861816, 0.007984415628015995, 0.02081330679357052, -0.019721804186701775, 0.004037064034491777, -0.0032707692589610815, -0.014675474725663662, -0.004309939686208963, 0.024999892339110374, 0.008223649114370346, 0.03752974048256874, -0.005588342901319265, -0.022114139050245285, -0.039054855704307556, -0.000485008378745988, -0.021336629986763, 0.010077707469463348, -0.002091423375532031, 0.004250131081789732, 
-0.003541775979101658, -0.0024147622752934694, 0.004519268870353699, -0.01286627259105444, 0.016267871484160423, 0.01960218884050846, 0.019377905875444412, 0.0017279007006436586, -0.017628511413931847, 0.029575230553746223, 0.014795091934502125, 0.024521425366401672, -0.019497523084282875, 0.02852858416736126, 0.011871958151459694, 0.005883646663278341, 0.021231966093182564, 0.015266082249581814, -0.0030165838543325663, -0.012185951694846153, 0.0009158154134638608, 0.0013896097661927342, -0.0025437241420149803, -0.003891281085088849, 0.012641990557312965, -0.025702640414237976, -0.0024035482201725245, -0.018914392217993736, -0.017359374091029167, 0.0026465195696800947, -0.01716499775648117, 0.011161734350025654, -0.018435925245285034, 0.007502210326492786, 0.015909021720290184, 0.005479940213263035, -0.018391069024801254, -0.0011942980345338583, 0.020947875455021858, 0.004403389990329742, 0.02758660353720188, -0.012821415439248085, -0.01417457964271307, -0.008335789665579796, 0.028947243466973305, 0.004052015952765942, -0.002057781210169196, 0.003762319218367338, 0.03130967170000076, 0.01734442263841629, -0.00015874912787694484, 0.009232915006577969, 0.01885458268225193, -0.009487099945545197, -0.01018237229436636, -0.013038220815360546, 0.007834894582629204, -0.015400650911033154, 0.010653362609446049, 0.01569969207048416, -0.007752657867968082, -0.002704459009692073, -0.002670816844329238, 0.011842054314911366, 0.021486151963472366, 0.0006125684012658894, 0.006691059563308954, -0.01053374633193016, -0.008104031905531883, 0.008963776752352715, 0.011214066296815872, -0.002201694995164871, 0.013681161217391491, 0.022428132593631744, -0.007636778987944126, -0.008986204862594604, 0.005494892597198486, 0.020693689584732056, 0.013868061825633049, 0.005808886606246233, -0.012499946169555187, -0.005016425624489784, 0.023564491420984268, 0.022024426609277725, -0.038396961987018585, 0.018705062568187714, -0.011012213304638863, 0.01377087365835905, -0.020065702497959137, -0.004478150513023138, 0.021411390975117683, -0.01734442263841629, 0.010974832810461521, -0.0015363270649686456, 0.0004896809114143252, 0.0014176449039950967, -0.014428765513002872, -0.0036613927222788334, -0.009240390732884407, -0.001327932346612215, 0.028468776494264603, -0.0023269187659025192, 0.0095170047134161, -0.0011036510113626719, -0.0028427657671272755, -0.015101609751582146, -0.011722437106072903, 0.017224805429577827, 0.021725384518504143, -0.004773454274982214, 0.038217537105083466, -0.009031061083078384, -0.009135725907981396, 0.004156680777668953, 0.0017951851477846503, 0.012245760299265385, -0.003595977323129773, -0.0065340627916157246, -0.0026801619678735733, -0.002121327444911003, -0.006025691516697407, 0.00020629209757316858, -0.0036613927222788334, 0.008732019923627377, -0.0029492995236068964, -0.00687422277405858, -0.03283478692173958, 0.009748761542141438, 0.006126618478447199, 0.006952721159905195, -0.0009307675063610077, 0.014496049843728542, -0.0031343314331024885, -0.0013148492434993386, -0.011789721436798573, 0.022592606022953987, -0.006373327691107988, 0.01858544535934925, 0.002652126597240567, 0.01637253724038601, -0.009277771227061749, -0.013404547236859798, 0.034539323300123215, -0.0014213828835636377, 0.01098230853676796, -0.009120773524045944, -0.0003782411222346127, 0.012477518059313297, -0.02005075104534626, 0.010608506388962269, 0.014182056300342083, -0.002530640922486782, -0.024476569145917892, 0.015909021720290184, 0.002351215807721019, 0.007864798419177532, -0.021590815857052803, 
0.006029429845511913, -0.016357583925127983, 0.014623142778873444, -0.0007737705600447953,
... <the remaining components of the embedding vector are omitted here for brevity> ...
0.011348634958267212, -0.013808254152536392, -0.020424552261829376], index=0, object='embedding')], model='text-embedding-3-large', object='list', usage=Usage(prompt_tokens=5, total_tokens=5))
```
{% endcode %}
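Each `embedding` in the response above is just a list of floats, so downstream tasks such as semantic search come down to simple vector math. Below is a minimal sketch (not one of the official examples, and assuming you have already obtained two such vectors from the embeddings endpoint) that scores how related two texts are via cosine similarity with NumPy:

{% code overflow="wrap" %}
```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors (plain lists of floats)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# vec_query and vec_doc stand in for vectors returned by the API
# (e.g. response.data[0].embedding in the output shown above); short dummy
# values are used here only to keep the sketch self-contained.
vec_query = [0.01, -0.02, 0.03]
vec_doc = [0.012, -0.018, 0.028]
print(cosine_similarity(vec_query, vec_doc))
```
{% endcode %}

Values close to 1.0 indicate semantically similar texts; the use-case article linked below builds a full semantic search workflow on top of this idea.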
You can find a more advanced example of using embedding vectors in our article [Find Relevant Answers: Semantic Search with Text Embeddings](https://docs.aimlapi.com/use-cases/find-relevant-answers-semantic-search-with-text-embeddings) in the Use Cases section.

## All Available Embedding Models

| Model ID + API Reference link | Developer | Context | Model Card |
| --- | --- | --- | --- |
| `alibaba/qwen-text-embedding-v3` | Alibaba Cloud | 32,000 | Qwen Text Embedding v3 |
| `alibaba/qwen-text-embedding-v4` | Alibaba Cloud | 32,000 | Qwen Text Embedding v4 |
| `voyage-2` | Anthropic | 4,000 | - |
| `voyage-code-2` | Anthropic | 16,000 | - |
| `voyage-finance-2` | Anthropic | 32,000 | - |
| `voyage-large-2` | Anthropic | 16,000 | - |
| `voyage-large-2-instruct` | Anthropic | 16,000 | Voyage Large 2 Instruct |
| `voyage-law-2` | Anthropic | 16,000 | - |
| `voyage-multilingual-2` | Anthropic | 32,000 | - |
| `BAAI/bge-base-en-v1.5` | BAAI | 512 | BAAI-Bge-Base-1p5 |
| `BAAI/bge-large-en-v1.5` | BAAI | 512 | bge-large-en |
| `text-multilingual-embedding-002` | Google | 2,000 | - |
| `text-embedding-3-small` | OpenAI | 8,000 | - |
| `text-embedding-3-large` | OpenAI | 8,000 | Text-embedding-3-large |
| `text-embedding-ada-002` | OpenAI | 8,000 | Text-embedding-ada-002 |
| `togethercomputer/m2-bert-80M-32k-retrieval` | Together AI | 32,000 | M2-BERT-Retrieval-32k |
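Any model ID from the table above can be supplied as the `model` value of an embeddings request. The snippet below is a rough sketch, assuming the OpenAI-compatible `/v1/embeddings` route and the official `openai` Python client; the API key value is a placeholder you must replace with your own key.

{% code overflow="wrap" %}
```python
from openai import OpenAI

# Assumes the OpenAI-compatible /v1/embeddings route exposed at api.aimlapi.com.
client = OpenAI(
    base_url="https://api.aimlapi.com/v1",
    api_key="<YOUR_AIMLAPI_KEY>",  # placeholder: insert your AI/ML API key
)

response = client.embeddings.create(
    model="text-embedding-3-large",  # any model ID from the table above
    input="The quick brown fox jumps over the lazy dog",
)

# Dimensionality of the returned vector depends on the chosen model.
print(len(response.data[0].embedding))
```
{% endcode %}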
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/baidu/ernie-4.5-0.3b.md # ernie-4.5-0.3b {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `baidu/ernie-4.5-0.3b` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A small dense language model suitable for edge-side use and budget-constrained inference. {% hint style="success" %} [Create AI/ML API Key](https://aimlapi.com/app/keys) {% endhint %}
How to make the first API call

**1️⃣ Required setup (don’t skip this)**\
▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\
▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI.

**2️⃣ Copy the code example**\
At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project.

**3️⃣ Update the snippet for your use case**\
▪ **Insert your API key:** replace `` with your real AI/ML API key.\
▪ **Select a model:** set the `model` field to the model you want to call.\
▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models).

**4️⃣ (Optional) Tune the request**\
Depending on the model type, you can add optional parameters to control the output (e.g., generation settings, quality, length, etc.). See the API schema below for the full list.

**5️⃣ Run your code**\
Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["baidu/ernie-4.5-0.3b"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"baidu/ernie-4.5-0.3b"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"baidu/ernie-4.5-0.3b", "messages":[ { "role":"user", "content":"Hi! What do you think about mankind?" # insert your prompt } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'baidu/ernie-4.5-0.3b', messages:[ { role:'user', content: 'Hi! What do you think about mankind?' // insert your prompt here } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "379ee72b089c50331cb4d6981414358b", "object": "chat.completion", "created": 1768943001, "model": "baidu/ernie-4.5-0.3b", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "Of course! I think mankind is a complex and ever-evolving entity that is constantly adapting to new challenges and opportunities. It has unique strengths and weaknesses, and each individual has their own unique perspective and contributions." }, "finish_reason": "stop" } ], "usage": { "prompt_tokens": 20, "completion_tokens": 46, "total_tokens": 66, "prompt_tokens_details": null, "completion_tokens_details": null }, "system_fingerprint": "" } ``` {% endcode %}
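The schema above also accepts `stream: true`, in which case the endpoint responds with server-sent events carrying incremental `chat.completion.chunk` objects. The snippet below is a rough sketch of consuming such a stream, assuming the common OpenAI-style SSE framing (`data:`-prefixed JSON lines terminated by a `[DONE]` sentinel); the API key value is a placeholder.

{% code overflow="wrap" %}
```python
import json
import requests

# A sketch of the streaming variant of /v1/chat/completions.
# The SSE framing assumed here follows the usual OpenAI-compatible convention.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",  # placeholder
        "Content-Type": "application/json",
    },
    json={
        "model": "baidu/ernie-4.5-0.3b",
        "messages": [{"role": "user", "content": "Hi! What do you think about mankind?"}],
        "stream": True,
    },
    stream=True,
)
response.raise_for_status()

for raw in response.iter_lines(decode_unicode=True):
    # Skip keep-alive lines and anything that is not a data event.
    if not raw or not raw.startswith("data:"):
        continue
    payload = raw[len("data:"):].strip()
    if payload == "[DONE]":
        break
    chunk = json.loads(payload)
    delta = chunk["choices"][0].get("delta") or {}
    print(delta.get("content") or "", end="", flush=True)
print()
```
{% endcode %}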
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/baidu/ernie-4.5-21b-a3b-thinking.md # ernie-4.5-21b-a3b-thinking {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `baidu/ernie-4.5-21b-a3b-thinking` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A post-trained LLM with 21B total parameters and 3B activated parameters per token. \ Reasoning variant. {% hint style="success" %} [Create AI/ML API Key](https://aimlapi.com/app/keys) {% endhint %}
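Since this is the reasoning variant, part of the completion budget can be spent on internal reasoning; per the API schema below, that usage is reported (when available) under `usage.completion_tokens_details.reasoning_tokens`. Below is a minimal sketch for inspecting it, assuming the same `/v1/chat/completions` call pattern used elsewhere in these docs; the API key value is a placeholder.

{% code overflow="wrap" %}
```python
import requests

# A minimal sketch (not the official page example): call the reasoning variant
# and inspect the usage block of the response.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",  # placeholder
        "Content-Type": "application/json",
    },
    json={
        "model": "baidu/ernie-4.5-21b-a3b-thinking",
        "messages": [{"role": "user", "content": "How many prime numbers are there below 50?"}],
    },
)
data = response.json()

print(data["choices"][0]["message"]["content"])
# When reported by the provider, reasoning token counts appear under
# usage.completion_tokens_details.reasoning_tokens.
print(data.get("usage", {}))
```
{% endcode %}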
How to make the first API call

**1️⃣ Required setup (don’t skip this)**\
▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\
▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI.

**2️⃣ Copy the code example**\
At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project.

**3️⃣ Update the snippet for your use case**\
▪ **Insert your API key:** replace `` with your real AI/ML API key.\
▪ **Select a model:** set the `model` field to the model you want to call.\
▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models).

**4️⃣ (Optional) Tune the request**\
Depending on the model type, you can add optional parameters to control the output (e.g., generation settings, quality, length, etc.). See the API schema below for the full list.

**5️⃣ Run your code**\
Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["baidu/ernie-4.5-21b-a3b-thinking"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"baidu/ernie-4.5-21b-a3b-thinking"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"baidu/ernie-4.5-21b-a3b-thinking", "messages":[ { "role":"user", "content":"Hi! What do you think about mankind?" # insert your prompt } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'baidu/ernie-4.5-21b-a3b-thinking', messages:[ { role:'user', content: 'Hi! What do you think about mankind?' // insert your prompt here } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "aee235c6df453eeb1c4d20a433e9348d", "object": "chat.completion", "created": 1768830316, "model": "baidu/ernie-4.5-21b-a3b-thinking", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "\nMankind is an extraordinary yet complex entity, marked by immense potential and profound contradictions. Here’s a snapshot of my perspective:\n\n### **1. Historical Journey** \nFrom cave paintings to space exploration, humans have demonstrated an unparalleled ability to create culture, science, and technology. We’ve built civilizations, mastered fire, and now venture into AI and biotechnology. Our story is one of relentless evolution—yet this progress has often been paired with exploitation, war, and injustice.\n\n### **2. Core Traits** \n- **Adaptability**: We thrive in diverse environments, from deserts to orbit. \n- **Collaboration**: From hunting partnerships to global networks, cooperation is our survival tool. \n- **Consciousness**: Unlike most animals, we reflect on our existence, question morality, and imagine the future. \n\n### **3. Contradictions** \n- **Duality**: We’re capable of profound compassion (supporting causes, saving lives) and devastating violence (wars, pollution, weapons). \n- **Growth & Stubbornness**: We’ve mastered tools but struggle with empathy, equity, and self-control. \n\n### **4. Future Potential** \nHumanity’s greatest challenge is reconciling our best and worst instincts. Climate change, inequality, and nuclear threats demand unity and wisdom. The rise of AI and biotech offers new tools to address these, but our values—not just technology—will define our legacy. \n\n### **5. Hope & Caution** \nWe’re a work in progress. Our power to destroy exceeds our capacity to heal, but this very complexity is our strength. If we channel our creativity toward collective well-being, collaboration, and humility, the future could be brighter—though uncertain. \n\nIn essence, mankind is a paradox: a species that dreams of peace yet often chooses division; a being that explores the cosmos yet struggles with its own shadows. Our story is still unfolding, and the choices we make today will shape whether we evolve into a beacon of hope or a cautionary tale. 🌍✨\n\n", "reasoning_content": "The user asked, \"Hi! What do you think about mankind?\" Hmm, that's a pretty broad question. Let me start by breaking it down. They want my perspective on humanity as a whole. Maybe they're interested in philosophy, sociology, or ethics. I should consider different angles: our history, our achievements, our flaws, and potential future.\n\nFirst, I need to acknowledge the complexity. Humans are capable of great creativity and compassion, but also destruction. I should mention both the positive and negative aspects. Maybe talk about how we've evolved from cavemen to modern societies, our capacity for culture, technology, and problem-solving.\n\nAlso, the user might be looking for a balanced view. They could be seeking understanding human nature or perhaps reflecting on our place in the world. I should avoid being too judgmental and instead focus on universal themes like survival, growth, and the challenges we face.\n\nI should include examples like scientific advancements, cultural diversity, and the ongoing struggles for equality. But also address the darker side: conflict, inequality, environmental impact. 
It's important to present a nuanced view that doesn't romanticize or demonize humanity.\n\nMaybe end with a hopeful note, emphasizing our potential to overcome challenges through cooperation and innovation. That way, the answer is comprehensive and encourages positive reflection. Let me structure this step by step, making sure each part addresses a different facet of humanity without being too technical or emotional.\n" }, "finish_reason": "stop" } ], "usage": { "prompt_tokens": 35, "completion_tokens": 796, "total_tokens": 831, "prompt_tokens_details": null, "completion_tokens_details": { "audio_tokens": 0, "reasoning_tokens": 311, "accepted_prediction_tokens": 0, "rejected_prediction_tokens": 0, "text_tokens": 0, "image_tokens": 0, "video_tokens": 0 } }, "system_fingerprint": "", "meta": { "usage": { "credits_used": 298 } } } ``` {% endcode %}
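As the response above shows, the thinking variant returns its chain of thought in a separate `reasoning_content` field next to the final answer in `content`, and reports reasoning tokens under `usage.completion_tokens_details`. Below is a minimal Python sketch of reading those fields from the response; the `<YOUR_AIMLAPI_KEY>` placeholder and the printed labels are purely illustrative.

{% code overflow="wrap" %}
```python
import requests

# Re-issue the request from the code example above (insert your AIML API key),
# then split the reply into the user-facing answer and the model's reasoning.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",  # placeholder: use your real key
        "Content-Type": "application/json",
    },
    json={
        "model": "baidu/ernie-4.5-21b-a3b-thinking",
        "messages": [{"role": "user", "content": "Hi! What do you think about mankind?"}],
    },
)
data = response.json()

message = data["choices"][0]["message"]
print("Answer:\n", message["content"])                      # the user-facing reply
print("\nReasoning:\n", message.get("reasoning_content", ""))  # chain of thought (thinking variant)

# The usage block reports reasoning tokens separately, as in the response above.
details = data["usage"].get("completion_tokens_details") or {}
print("\nReasoning tokens:", details.get("reasoning_tokens"))
```
{% endcode %}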
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/baidu/ernie-4.5-21b-a3b.md

# ernie-4.5-21b-a3b

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `baidu/ernie-4.5-21b-a3b`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}

## Model Overview

A post-trained LLM with 21B total parameters and 3B activated parameters per token.\
Non-reasoning variant.

{% hint style="success" %}
[Create AI/ML API Key](https://aimlapi.com/app/keys)
{% endhint %}
How to make the first API call

**1️⃣ Required setup (don’t skip this)**\
▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\
▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI.

**2️⃣ Copy the code example**\
At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project.

**3️⃣ Update the snippet for your use case**\
▪ **Insert your API key:** replace the placeholder in the `Authorization` header with your real AI/ML API key.\
▪ **Select a model:** set the `model` field to the model you want to call.\
▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models).

**4️⃣ (Optional) Tune the request**\
Depending on the model type, you can add optional parameters to control the output (e.g., generation settings, quality, length). See the API schema below for the full list.

**5️⃣ Run your code**\
Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
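For illustration, here is a minimal sketch of what the Python snippet from the bottom of this page might look like after steps 3 and 4: the key is read from an environment variable (one convenient way to avoid hard-coding it, not something the API requires; the variable name `AIMLAPI_KEY` is hypothetical), the `model` field is set, and two optional parameters from the API schema below (`temperature`, `max_tokens`) are added.

{% code overflow="wrap" %}
```python
import os
import requests

# Step 3: insert your key and choose the model.
# Step 4: add optional tuning parameters documented in the API schema below.
API_KEY = os.environ["AIMLAPI_KEY"]  # hypothetical variable name; use your own setup

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "baidu/ernie-4.5-21b-a3b",
        "messages": [
            {"role": "user", "content": "Summarize the plot of Hamlet in two sentences."}
        ],
        "temperature": 0.7,   # optional: sampling temperature (0–2)
        "max_tokens": 256,    # optional: cap on generated tokens
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}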
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["baidu/ernie-4.5-21b-a3b"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"baidu/ernie-4.5-21b-a3b"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"baidu/ernie-4.5-21b-a3b", "messages":[ { "role":"user", "content":"Hi! What do you think about mankind?" # insert your prompt } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'baidu/ernie-4.5-21b-a3b', messages:[ { role:'user', content: 'Hi! What do you think about mankind?' // insert your prompt here } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "104959f043e51f1b4a4dd83c494886ab", "object": "chat.completion", "created": 1768829974, "model": "baidu/ernie-4.5-21B-a3b", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "\nAs an AI, I don't have personal opinions or emotions, but I can provide insights based on human perspectives and available knowledge. Mankind is a remarkable and complex species with incredible potential for both progress and challenges. Here are some thoughts:\n\n### Positive Aspects\n1. **Innovation and Creativity**: Humans have demonstrated an extraordinary ability to innovate, from the development of tools and technology to the creation of art, music, and literature. This creativity has driven societal advancement and improved the quality of life for many.\n2. **Empathy and Compassion**: Many individuals within the human race possess a strong sense of empathy and compassion, leading to acts of kindness, charity, and social support. This has fostered communities and helped address various forms of suffering and inequality.\n3. **Problem-Solving Skills**: Humans are adept at solving complex problems, whether it's finding cures for diseases, developing sustainable energy sources, or addressing environmental challenges. This problem-solving ability has the potential to create a better future for all.\n\n### Challenges\n1. **Conflict and Violence**: Unfortunately, humans have also been capable of causing immense harm and destruction through conflict, war, and violence. These actions often stem from differences in ideology, culture, or resources, highlighting the need for conflict resolution and peaceful cooperation.\n2. **Inequality and Injustice**: Despite progress, significant inequalities and injustices persist in many parts of the world. These include economic disparities, gender inequality, and racial discrimination, which hinder social progress and well-being.\n3. **Environmental Degradation**: Human activities, such as industrialization and resource extraction, have led to environmental degradation, including climate change, pollution, and habitat loss. Addressing these issues is crucial for the survival and well-being of future generations.\n\n### Future Outlook\nThe future of mankind is uncertain but充满希望. With continued efforts in education, technology, and international cooperation, there is potential for a more just, peaceful, and sustainable world. However, this requires collective action, responsibility, and a commitment to addressing the challenges we face.\n\nIn summary, mankind is a diverse and dynamic species with both remarkable strengths and significant challenges. By working together and leveraging our collective wisdom and creativity, we can strive towards a brighter future for all." }, "finish_reason": "stop" } ], "usage": { "prompt_tokens": 16, "completion_tokens": 495, "total_tokens": 511, "prompt_tokens_details": null, "completion_tokens_details": null }, "system_fingerprint": "", "meta": { "usage": { "credits_used": 301 } } } ``` {% endcode %}
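The request schema on this page also documents the `stream` and `stream_options` parameters, with streamed chunks delivered as `text/event-stream` events. Below is a minimal Python sketch of how such a stream might be consumed; it assumes the OpenAI-style SSE framing in which each event line is prefixed with `data: ` and the stream ends with a `[DONE]` sentinel — adjust if your stream is framed differently.

{% code overflow="wrap" %}
```python
import json
import requests

# A minimal streaming sketch (assumed SSE framing: "data: " prefix, "[DONE]" sentinel).
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key after "Bearer "
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "baidu/ernie-4.5-21b-a3b",
        "messages": [
            {"role": "user", "content": "Hi! What do you think about mankind?"}
        ],
        "stream": True,
        "stream_options": {"include_usage": True},
    },
    stream=True,
)
response.raise_for_status()

for line in response.iter_lines(decode_unicode=True):
    # Skip keep-alive blanks and any non-data lines.
    if not line or not line.startswith("data: "):
        continue
    payload = line[len("data: "):]
    if payload.strip() == "[DONE]":
        break
    chunk = json.loads(payload)
    # Each chunk follows the chat.completion.chunk schema above; deltas may be empty.
    for choice in chunk.get("choices", []):
        delta = choice.get("delta") or {}
        print(delta.get("content", ""), end="", flush=True)
print()
```
{% endcode %}

Each parsed chunk follows the `chat.completion.chunk` schema above, so the text arrives incrementally in `choices[].delta.content`, with usage statistics included in the final chunk when `include_usage` is set.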
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/baidu/ernie-4.5-300b-a47b-paddle.md # ernie-4.5-300b-a47b-paddle {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `baidu/ernie-4.5-300b-a47b-paddle` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A super-large language model that, as of August 2025, is positioned as a leading Chinese MoE architecture and a foundation model for enterprise applications. {% hint style="success" %} [Create AI/ML API Key](https://aimlapi.com/app/keys) {% endhint %}
How to make the first API call

**1️⃣ Required setup (don’t skip this)**\
▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\
▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI.

**2️⃣ Copy the code example**\
At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project.

**3️⃣ Update the snippet for your use case**\
▪ **Insert your API key:** put your real AI/ML API key after `Bearer` in the `Authorization` header.\
▪ **Select a model:** set the `model` field to the model you want to call.\
▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models).

**4️⃣ (Optional) Tune the request**\
Depending on the model type, you can add optional parameters to control the output (e.g., generation settings, quality, length). See the API schema below for the full list, and the short sketch after these steps for an example.

**5️⃣ Run your code**\
Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
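As a quick illustration of step 4, here is a minimal sketch that extends the basic request from the code example at the bottom of this page with two of the optional generation parameters documented in the API schema below (`temperature` and `max_tokens`); the specific values and prompt are illustrative only.

{% code overflow="wrap" %}
```python
import requests

# A minimal sketch of step 4: the basic chat request plus optional tuning parameters.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key after "Bearer "
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "baidu/ernie-4.5-300b-a47b-paddle",
        "messages": [
            {"role": "user", "content": "Summarize the idea of MoE models in two sentences."}
        ],
        "temperature": 0.2,  # lower values give more focused, deterministic output
        "max_tokens": 256,   # cap the completion length to help control cost
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}

Per the schema notes below, it is generally recommended to adjust either `temperature` or `top_p`, but not both.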
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["baidu/ernie-4.5-300b-a47b-paddle"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"baidu/ernie-4.5-300b-a47b-paddle"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"baidu/ernie-4.5-300b-a47b-paddle", "messages":[ { "role":"user", "content":"Hi! What do you think about mankind?" # insert your prompt } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'baidu/ernie-4.5-300b-a47b-paddle', messages:[ { role:'user', content: 'Hi! What do you think about mankind?' // insert your prompt here } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "9a0e333a0cfa4d86c89a1f7bd3a2919f", "object": "chat.completion", "created": 1768943231, "model": "baidu/ernie-4.5-300b-a47b-paddle", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "The question \"What do you think about mankind?\" invites a reflection on humanity's complexities. Here's a structured response:\n\n**Step 1: Define the scope** \nMankind encompasses both collective achievements and individual flaws. It's a species marked by creativity, empathy, and resilience, yet also by conflict, inequality, and environmental impact.\n\n**Step 2: Highlight positive traits** \nHumanity has demonstrated remarkable capacity for innovation (e.g., technology, medicine), cultural expression (art, literature), and moral progress (civil rights, environmental awareness). Cooperation during crises, such as disaster relief or global health initiatives, underscores collective potential.\n\n**Step 3: Acknowledge challenges** \nPersistent issues like war, poverty, and systemic injustice reveal ethical gaps. Environmental degradation and climate change further highlight unsustainable practices. These contradictions often stem from short-term thinking or unequal resource distribution.\n\n**Step 4: Emphasize growth potential** \nHistory shows humanity's ability to learn and adapt. Movements for social justice, renewable energy transitions, and scientific breakthroughs suggest progress is possible when values align with action.\n\n**Final Answer** \nMankind is a paradoxical yet hopeful entity—capable of profound compassion and destructive shortsightedness. Its future hinges on balancing self-interest with collective responsibility, leveraging intelligence and empathy to address shared challenges." }, "finish_reason": "stop" } ], "usage": { "prompt_tokens": 13, "completion_tokens": 289, "total_tokens": 302, "prompt_tokens_details": null, "completion_tokens_details": null }, "system_fingerprint": "", "meta": { "usage": { "credits_used": 615 } } } ``` {% endcode %}
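The request schema above also documents a `response_format` parameter with a `json_schema` option for structured output. The sketch below shows one way it might be used; the schema name and fields are illustrative only, and it assumes the structured result is returned as a JSON string in `message.content`, as in the regular responses shown above.

{% code overflow="wrap" %}
```python
import json
import requests

# A minimal sketch of the json_schema response format documented in the schema above.
# The schema name ("mankind_summary") and its fields are illustrative, not part of the API.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key after "Bearer "
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "baidu/ernie-4.5-300b-a47b-paddle",
        "messages": [
            {"role": "user", "content": "List three strengths and three challenges of mankind."}
        ],
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": "mankind_summary",
                "strict": True,
                "schema": {
                    "type": "object",
                    "properties": {
                        "strengths": {"type": "array", "items": {"type": "string"}},
                        "challenges": {"type": "array", "items": {"type": "string"}},
                    },
                    "required": ["strengths", "challenges"],
                    "additionalProperties": False,
                },
            },
        },
    },
)
response.raise_for_status()

# Assumes the structured output arrives as a JSON string in message.content.
content = response.json()["choices"][0]["message"]["content"]
print(json.loads(content))
```
{% endcode %}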
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/baidu/ernie-4.5-300b-a47b.md # ernie-4.5-300b-a47b {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `baidu/ernie-4.5-300b-a47b` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A post-trained LLM with 300B total parameters and 47B activated parameters per token. \ Non-reasoning variant. {% hint style="success" %} [Create AI/ML API Key](https://aimlapi.com/app/keys) {% endhint %}
How to make the first API call

**1️⃣ Required setup (don’t skip this)**\
▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\
▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI.

**2️⃣ Copy the code example**\
At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project.

**3️⃣ Update the snippet for your use case**\
▪ **Insert your API key:** put your real AI/ML API key after `Bearer` in the `Authorization` header.\
▪ **Select a model:** set the `model` field to the model you want to call.\
▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models).

**4️⃣ (Optional) Tune the request**\
Depending on the model type, you can add optional parameters to control the output (e.g., generation settings, quality, length). See the API schema below for the full list; a short function-calling sketch also follows these steps.

**5️⃣ Run your code**\
Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
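For the function-calling sketch referenced in step 4: the API schema below documents `tools` and `tool_choice` fields for function calling. The example below is a minimal sketch; the `get_weather` function, its parameters, and the prompt are illustrative only, and (as the schema itself notes) the generated arguments should be validated before your code acts on them.

{% code overflow="wrap" %}
```python
import json
import requests

# A minimal tool-calling sketch based on the `tools` / `tool_choice` fields in the schema below.
# The get_weather function and its arguments are hypothetical and for illustration only.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key after "Bearer "
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "baidu/ernie-4.5-300b-a47b",
        "messages": [
            {"role": "user", "content": "What is the weather in Paris right now?"}
        ],
        "tools": tools,
        "tool_choice": "auto",
    },
)
response.raise_for_status()

message = response.json()["choices"][0]["message"]
# If the model decided to call the tool, the arguments arrive as a JSON string
# that may be malformed, so validate before use.
for call in message.get("tool_calls") or []:
    name = call["function"]["name"]
    try:
        args = json.loads(call["function"]["arguments"])
    except json.JSONDecodeError:
        args = None  # invalid JSON from the model; handle or re-prompt as needed
    print(name, args)
```
{% endcode %}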
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["baidu/ernie-4.5-300b-a47b"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"baidu/ernie-4.5-300b-a47b"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"baidu/ernie-4.5-300b-a47b", "messages":[ { "role":"user", "content":"Hi! What do you think about mankind?" # insert your prompt } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'baidu/ernie-4.5-300b-a47b', messages:[ { role:'user', content: 'Hi! What do you think about mankind?' // insert your prompt here } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "019bd682e8fd00bca8d4a94aace650ce",
  "object": "chat.completion",
  "created": 1768830462,
  "model": "baidu/ernie-4.5-300b-a47b",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "As an AI, I don't have personal feelings, but I can provide an analysis of humanity based on available data.\n\nMankind is an incredibly diverse and complex species, marked by remarkable achievements and significant challenges:\n\n**Strengths:**\n1. **Innovation:** Humans have developed advanced technology, from early tools to space exploration, dramatically reshaping their environments and societies.\n2. **Creativity:** Art, literature, music, and philosophy reflect profound emotional and intellectual depth.\n3. **Adaptability:** Humans thrive in nearly every climate on Earth, demonstrating remarkable resilience and resourcefulness.\n4. **Social Cooperation:** Complex societies, governments, and economies enable large-scale collaboration.\n5. **Empathy & Altruism:** Many individuals work selflessly to help others, often across cultural and geographic divides.\n\n**Challenges:**\n1. **Conflict:** War, violence, and discrimination persist due to differences in ideology, resources, or identity.\n2. **Environmental Impact:** Climate change, deforestation, and pollution threaten ecosystems and future survival.\n3. **Inequality:** Wealth gaps, access to education, and healthcare disparities undermine social stability.\n4. **Ethical Dilemmas:** Rapid technological advancements (e.g., AI, genetic engineering) raise questions about responsibility and long-term consequences.\n\n**Potential:** Humanity continues to evolve, with growing awareness of global interconnectedness. Movements for sustainability, social justice, and scientific collaboration suggest a capacity for positive change.\n\nUltimately, mankind's future depends on balancing ambition with wisdom, harnessing progress for collective well-being while addressing vulnerabilities. What aspect of humanity interests you most?"
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 16,
    "completion_tokens": 371,
    "total_tokens": 387,
    "prompt_tokens_details": {
      "cached_tokens": 0
    },
    "prompt_cache_hit_tokens": 0,
    "prompt_cache_miss_tokens": 16
  },
  "system_fingerprint": "",
  "meta": {
    "usage": {
      "credits_used": 944
    }
  }
}
```
{% endcode %}
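The API schema above also describes streaming: setting `stream` to `true` returns the completion as server-sent events (`text/event-stream`), with each chunk carrying a `choices[].delta`. Below is a minimal Python sketch of consuming such a stream with `requests`. The `<YOUR_AIMLAPI_KEY>` placeholder is yours to replace, and the `data: ` / `data: [DONE]` framing follows the usual OpenAI-compatible SSE convention, so treat it as an assumption rather than a guarantee of this page.

{% code overflow="wrap" %}
```python
import json

import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "baidu/ernie-4.5-300b-a47b",
        "messages": [
            {"role": "user", "content": "Hi! What do you think about mankind?"}
        ],
        "stream": True,  # stream the answer as server-sent events
        "stream_options": {"include_usage": True},  # also emit a final usage chunk
    },
    stream=True,  # let requests yield the body as it arrives
)
response.raise_for_status()

for line in response.iter_lines(decode_unicode=True):
    # Each SSE line looks like: data: {"id": ..., "choices": [{"delta": {...}}], ...}
    if not line or not line.startswith("data: "):
        continue
    payload = line[len("data: "):]
    if payload == "[DONE]":  # assumed OpenAI-style end-of-stream marker
        break
    chunk = json.loads(payload)
    if chunk.get("choices"):  # the final usage-only chunk may carry no choices
        print(chunk["choices"][0]["delta"].get("content") or "", end="", flush=True)
print()
```
{% endcode %}

Printing each `delta.content` as it arrives is usually enough for a console demo; in a real application you would accumulate the deltas into the full assistant message.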
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/baidu/ernie-4.5-8k-preview.md

# ernie-4.5-8k-preview

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `baidu/ernie-4-5-8k-preview`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}

## Model Overview

A relatively small preview version of ERNIE 4.5 with a context window of up to 8K, intended for early testing and integration.

{% hint style="success" %}
[Create AI/ML API Key](https://aimlapi.com/app/keys)
{% endhint %}
How to make the first API call

**1️⃣ Required setup (don’t skip this)**\
▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\
▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI.

**2️⃣ Copy the code example**\
At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project.

**3️⃣ Update the snippet for your use case**\
▪ **Insert your API key:** replace `` with your real AI/ML API key.\
▪ **Select a model:** set the `model` field to the model you want to call.\
▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models).

**4️⃣ (Optional) Tune the request**\
Depending on the model type, you can add optional parameters to control the output (e.g., generation settings, quality, length, etc.). See the API schema below for the full list.

**5️⃣ Run your code**\
Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["baidu/ernie-4-5-8k-preview"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"baidu/ernie-4-5-8k-preview"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"baidu/ernie-4-5-8k-preview", "messages":[ { "role":"user", "content":"Hi! What do you think about mankind?" # insert your prompt } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'baidu/ernie-4-5-8k-preview', messages:[ { role:'user', content: 'Hi! What do you think about mankind?' // insert your prompt here } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "as-aqgrjim0cp", "object": "chat.completion", "created": 1768942536, "model": "ernie-4.5-8k-preview", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "Hello! That's a big and fascinating question. Humanity is incredibly diverse, creative, and resilient. We have an amazing ability to innovate, solve problems, and build complex societies. At the same time, we also grapple with conflicts, inequalities, and challenges like climate change.\n\nOur history is a mix of great achievements and painful mistakes, but overall, there's a lot of potential for growth, understanding, and positive change. What aspects of mankind interest you the most?" }, "finish_reason": "stop", "flag": 0 } ], "usage": { "prompt_tokens": 13, "completion_tokens": 99, "total_tokens": 112 }, "meta": { "usage": { "credits_used": 545 } } } ``` {% endcode %}
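In most applications you will read specific fields from the parsed response rather than printing the whole JSON. Below is a minimal Python sketch of that pattern; it repeats the request from the example above (the `<YOUR_AIMLAPI_KEY>` placeholder stands in for your own key) and then pulls out the generated text and the token usage shown in the response:

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Placeholder: replace <YOUR_AIMLAPI_KEY> with your real AI/ML API key
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "baidu/ernie-4-5-8k-preview",
        "messages": [
            {"role": "user", "content": "Hi! What do you think about mankind?"}
        ],
    },
)
data = response.json()

# The generated text is in the first choice; token counts are under "usage".
print(data["choices"][0]["message"]["content"])
print("Total tokens:", data["usage"]["total_tokens"])
```
{% endcode %}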
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/baidu/ernie-4.5-turbo-128k.md # ernie-4.5-turbo-128k {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `baidu/ernie-4-5-turbo-128k` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A model from the ERNIE 4.5 Turbo subfamily, which Baidu presents as a faster and more cost-efficient alternative to the base ERNIE 4.5. It is optimized for improved response speed and stability, and features a large context window of approximately 128K tokens, enabling the processing of entire documents or long-running dialogues. {% hint style="success" %} [Create AI/ML API Key](https://aimlapi.com/app/keys) {% endhint %}
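For example, the large context window lets you send the full text of a document together with your question in a single request. The sketch below is illustrative only: the local file name `annual_report.txt`, the key placeholder, and the prompts are assumptions, while the endpoint, model ID, and request fields follow the API schema further down this page:

{% code overflow="wrap" %}
```python
import requests

# Placeholder document: any long local text file you want the model to work with.
with open("annual_report.txt", "r", encoding="utf-8") as f:
    document_text = f.read()

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Placeholder: replace <YOUR_AIMLAPI_KEY> with your real AI/ML API key
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "baidu/ernie-4-5-turbo-128k",
        "messages": [
            {"role": "system", "content": "Answer using only the provided document."},
            {"role": "user", "content": f"Summarize the key points of this document:\n\n{document_text}"},
        ],
        "max_tokens": 1024,  # cap the length of the summary
    },
)
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}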
How to make the first API call **1️⃣ Required setup (don’t skip this)**\ ▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\ ▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI. **2️⃣ Copy the code example**\ At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project. **3️⃣ Update the snippet for your use case**\ ▪ **Insert your API key:** put your real AI/ML API key after `Bearer` in the `Authorization` header.\ ▪ **Select a model:** set the `model` field to the model you want to call.\ ▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models). **4️⃣ (Optional) Tune the request**\ Depending on the model type, you can add optional parameters to control the output (e.g., generation settings, quality, length, etc.). See the API schema below for the full list, and the short sketch after these steps for one example. **5️⃣ Run your code**\ Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
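As a concrete illustration of step 4, here is roughly what a tuned request can look like. The parameter values are arbitrary examples and the key placeholder is an assumption; the full list of optional parameters and their allowed ranges is given in the API schema below:

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Placeholder: replace <YOUR_AIMLAPI_KEY> with your real AI/ML API key
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "baidu/ernie-4-5-turbo-128k",
        "messages": [
            {"role": "user", "content": "Write a two-sentence product description for a smart kettle."}
        ],
        # Optional tuning parameters (illustrative values):
        "temperature": 0.7,  # lower = more focused, higher = more varied output
        "max_tokens": 200,   # upper bound on the length of the completion
    },
)
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}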
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["baidu/ernie-4-5-turbo-128k"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"baidu/ernie-4-5-turbo-128k"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"baidu/ernie-4-5-turbo-128k", "messages":[ { "role":"user", "content":"Hi! What do you think about mankind?" # insert your prompt } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'baidu/ernie-4-5-turbo-128k', messages:[ { role:'user', content: 'Hi! What do you think about mankind?' // insert your prompt here } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "as-hjivyd5xqd", "object": "chat.completion", "created": 1768942341, "model": "ernie-4.5-turbo-128k", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "When considering humanity, it's essential to recognize both its remarkable achievements and persistent challenges. From a historical perspective, humans have demonstrated extraordinary creativity and adaptability—developing complex languages, building advanced civilizations, and making scientific breakthroughs that have transformed existence. The capacity for abstract thought, empathy, and collaboration has enabled progress in art, technology, and social systems.\n\nHowever, this progress coexists with significant flaws. Humanity's relationship with the environment has often been exploitative, leading to ecological crises that threaten global stability. Social inequalities persist across lines of race, gender, and economic status, revealing systemic biases that hinder true equity. Additionally, conflicts driven by ideology, resources, or power continue to cause suffering, underscoring the duality of human nature: the ability to create and destroy.\n\nThe modern era presents both hope and urgency. Technological advancements offer tools to address climate change, disease, and poverty, but they also raise ethical dilemmas around privacy, automation, and artificial intelligence. Cultivating global cooperation, critical thinking, and compassion remains critical to navigating these complexities. Ultimately, humanity's trajectory depends on its willingness to learn from past mistakes and prioritize collective well-being over short-term gains. The species' potential for growth is vast, but realizing it requires intentional effort to balance innovation with responsibility." }, "finish_reason": "stop", "flag": 0 } ], "usage": { "prompt_tokens": 13, "completion_tokens": 268, "total_tokens": 281 }, "meta": { "usage": { "credits_used": 314 } } } ``` {% endcode %}
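In the response above the model stopped on its own (`"finish_reason": "stop"`). If you limit the output with `max_tokens`, it is worth checking for `"length"` instead, which the schema lists as the finish reason when the token limit was reached. A small helper sketch, assuming `data` holds a parsed response like the one shown above:

{% code overflow="wrap" %}
```python
def completion_text(data: dict) -> str:
    """Return the completion text from a parsed /v1/chat/completions response,
    warning if it was cut off by the max_tokens limit."""
    choice = data["choices"][0]
    if choice["finish_reason"] == "length":
        # The model stopped because max_tokens was reached, so the text may be
        # truncated; consider raising max_tokens or asking the model to continue.
        print("Warning: the completion was truncated.")
    return choice["message"]["content"]
```
{% endcode %}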
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/baidu/ernie-4.5-turbo-vl-32k.md # ernie-4.5-turbo-vl-32k {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `baidu/ernie-4-5-turbo-vl-32k` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A model from the ERNIE 4.5 Turbo subfamily with multimodal support (text and images), offering a balanced trade-off between performance and computational cost. {% hint style="success" %} [Create AI/ML API Key](https://aimlapi.com/app/keys) {% endhint %}
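Because this model accepts images as well as text, a user message can combine a `text` part with an `image_url` part, as described in the API schema below. A minimal sketch (the image URL and key placeholder are assumptions; JPG/JPEG, PNG, GIF, and WEBP formats are supported):

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Placeholder: replace <YOUR_AIMLAPI_KEY> with your real AI/ML API key
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "baidu/ernie-4-5-turbo-vl-32k",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "What is shown in this picture?"},
                    {
                        "type": "image_url",
                        # Placeholder URL: point this at your own image
                        "image_url": {"url": "https://example.com/photo.jpg"},
                    },
                ],
            }
        ],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}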
How to make the first API call **1️⃣ Required setup (don’t skip this)**\ ▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\ ▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI. **2️⃣ Copy the code example**\ At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project. **3️⃣ Update the snippet for your use case**\ ▪ **Insert your API key:** put your real AI/ML API key after `Bearer` in the `Authorization` header.\ ▪ **Select a model:** set the `model` field to the model you want to call.\ ▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models). **4️⃣ (Optional) Tune the request**\ Depending on the model type, you can add optional parameters to control the output (e.g., generation settings, quality, length, etc.). See the API schema below for the full list, and the streaming sketch after these steps for one example. **5️⃣ Run your code**\ Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
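One optional parameter worth highlighting is `stream`, which returns the completion incrementally as server-sent events. The sketch below assumes the common OpenAI-style framing (`data: {...}` lines ending with a `data: [DONE]` sentinel) and a `<YOUR_AIMLAPI_KEY>` placeholder; the chunk fields follow the `text/event-stream` schema on this page:

{% code overflow="wrap" %}
```python
import json
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Placeholder: replace <YOUR_AIMLAPI_KEY> with your real AI/ML API key
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "baidu/ernie-4-5-turbo-vl-32k",
        "messages": [
            {"role": "user", "content": "Describe autumn in three sentences."}
        ],
        "stream": True,
    },
    stream=True,  # keep the HTTP connection open and read the body incrementally
)

for raw in response.iter_lines():
    if not raw:
        continue
    line = raw.decode("utf-8")
    if not line.startswith("data: "):
        continue
    payload = line[len("data: "):]
    if payload.strip() == "[DONE]":
        break
    chunk = json.loads(payload)
    if not chunk.get("choices"):
        continue  # e.g. a usage-only chunk when include_usage is requested
    delta = chunk["choices"][0].get("delta") or {}
    # Print each piece of text as soon as it arrives.
    print(delta.get("content") or "", end="", flush=True)
print()
```
{% endcode %}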
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["baidu/ernie-4-5-turbo-vl-32k"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. 
Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. 
The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"baidu/ernie-4-5-turbo-vl-32k"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. 
Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"baidu/ernie-4-5-turbo-vl-32k", "messages":[ { "role":"user", "content":"Hi! What do you think about mankind?" # insert your prompt } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'baidu/ernie-4-5-turbo-vl-32k', messages:[ { role:'user', content: 'Hi! What do you think about mankind?' // insert your prompt here } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "as-x477c1sszk", "object": "chat.completion", "created": 1768942422, "model": "ernie-4.5-turbo-vl-32k", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "Hello! That's a big and fascinating question. Humanity is incredibly diverse, creative, and resilient. We have an amazing ability to innovate, solve problems, and build complex societies. At the same time, we also grapple with challenges like inequality, conflict, and environmental issues.\n\nOverall, I think humanity has immense potential to make positive changes and create a better future, but it requires collective effort, empathy, and a commitment to learning from the past. What are your thoughts on this?" }, "finish_reason": "stop", "flag": 0 } ], "usage": { "prompt_tokens": 13, "completion_tokens": 101, "total_tokens": 114 }, "meta": { "usage": { "credits_used": 318 } } } ``` {% endcode %}
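The schema above also defines a streamed `text/event-stream` response. If you set `stream` to `true`, the completion arrives as a sequence of `chat.completion.chunk` objects instead of a single reply. The sketch below shows one way to consume that stream with `requests`; the `data:` prefix and the `[DONE]` sentinel are common server-sent-events conventions and are assumptions here, not guaranteed by the schema, so adjust the parsing if your responses look different.

{% code overflow="wrap" %}
```python
import json
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "baidu/ernie-4-5-turbo-vl-32k",
        "messages": [
            {"role": "user", "content": "Hi! What do you think about mankind?"}
        ],
        # Stream the answer as server-sent events (see `stream` in the schema above)
        "stream": True,
        # Ask for usage statistics in the final chunk (see `stream_options`)
        "stream_options": {"include_usage": True},
    },
    stream=True,  # keep the HTTP connection open and read the body incrementally
)
response.raise_for_status()

for line in response.iter_lines(decode_unicode=True):
    # Each event line typically looks like: data: {"id": ..., "choices": [...], ...}
    if not line or not line.startswith("data:"):
        continue
    payload = line[len("data:"):].strip()
    if payload == "[DONE]":  # common termination sentinel (assumed, not in the schema)
        break
    chunk = json.loads(payload)
    # A final usage-only chunk may arrive with an empty `choices` list
    if chunk.get("choices"):
        delta = chunk["choices"][0].get("delta") or {}
        print(delta.get("content") or "", end="", flush=True)
print()
```
{% endcode %}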

---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/baidu/ernie-4.5-vl-28b-a3b.md

# ernie-4.5-vl-28b-a3b

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `baidu/ernie-4.5-vl-28b-a3b`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}

## Model Overview

A post-trained LLM with 28B total parameters and 3B activated parameters per token.\
A non-reasoning variant with image and PDF input support.

{% hint style="success" %}
[Create AI/ML API Key](https://aimlapi.com/app/keys)
{% endhint %}
How to make the first API call

**1️⃣ Required setup (don’t skip this)**\
▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\
▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI.

**2️⃣ Copy the code example**\
At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project.

**3️⃣ Update the snippet for your use case**\
▪ **Insert your API key:** replace the placeholder value with your real AI/ML API key.\
▪ **Select a model:** set the `model` field to the model you want to call.\
▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models).

**4️⃣ (Optional) Tune the request**\
Depending on the model type, you can add optional parameters to control the output (generation settings, quality, length, etc.). See the API schema below for the full list.

**5️⃣ Run your code**\
Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["baidu/ernie-4.5-vl-28b-a3b"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. 
Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. 
The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"baidu/ernie-4.5-vl-28b-a3b"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. 
Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"baidu/ernie-4.5-vl-28b-a3b", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'baidu/ernie-4.5-vl-28b-a3b', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "b1946f423718276c56f085ef83bfded2", "object": "chat.completion", "created": 1768830849, "model": "baidu/ernie-4.5-vl-28b-a3b", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "Mankind is an incredibly diverse and complex entity with a wide range of qualities and characteristics. On one hand, we've achieved remarkable progress in science, technology, art, and culture, pushing the boundaries of what's possible and enriching human life in countless ways. Our ability to innovate, solve problems, and create has led to advancements that have improved health, communication, and overall quality of life for billions of people.\n\nHowever, we also face significant challenges. Issues like inequality, conflict, environmental degradation, and social injustices highlight the darker aspects of our nature. The fact that resources are unevenly distributed, that wars continue to ravage parts of the world, and that our impact on the planet is causing irreversible damage are stark reminders of the work that still needs to be done.\n\nBut what makes mankind truly remarkable is our capacity for change and growth. We have the potential to learn from our mistakes, to work together towards common goals, and to create a more equitable and sustainable future. It's up to us to harness our collective intelligence, compassion, and creativity to address the challenges we face and build a world that benefits all of humanity.\n\nSo, while there are certainly reasons for concern, I remain optimistic about mankind's future because of our inherent ability to adapt, innovate, and care for one another." }, "finish_reason": "stop" } ], "usage": { "prompt_tokens": 22, "completion_tokens": 280, "total_tokens": 302, "prompt_tokens_details": null, "completion_tokens_details": null }, "system_fingerprint": "", "meta": { "usage": { "credits_used": 344 } } } ``` {% endcode %}
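The code example above sends a plain text prompt. Because this model also accepts image input, a user message can instead carry an array of content parts that mixes `text` and `image_url` items, as defined in the API schema above. A minimal sketch is shown below; the image URL is a hypothetical placeholder, so substitute any publicly reachable JPG/JPEG, PNG, GIF, or WEBP link (or base64-encoded image data).

{% code overflow="wrap" %}
```python
import json
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "baidu/ernie-4.5-vl-28b-a3b",
        "messages": [
            {
                "role": "user",
                # Multimodal content: one text part plus one image_url part,
                # following the content-part schema above
                "content": [
                    {"type": "text", "text": "Describe what is shown in this image."},
                    {
                        "type": "image_url",
                        "image_url": {
                            # Placeholder URL: replace with your own image or base64 data
                            "url": "https://example.com/your-image.jpg",
                            "detail": "auto",
                        },
                    },
                ],
            }
        ],
    },
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}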

---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/baidu/ernie-4.5-vl-424b-a47b.md

# ernie-4.5-vl-424b-a47b

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `baidu/ernie-4.5-vl-424b-a47b`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}

## Model Overview

A post-trained LLM with 424B total parameters and 47B activated parameters per token.\
A non-reasoning variant with image and PDF input support.

{% hint style="success" %}
[Create AI/ML API Key](https://aimlapi.com/app/keys)
{% endhint %}
How to make the first API call

**1️⃣ Required setup (don’t skip this)**\
▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\
▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI.

**2️⃣ Copy the code example**\
At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project.

**3️⃣ Update the snippet for your use case**\
▪ **Insert your API key:** replace the placeholder value with your real AI/ML API key.\
▪ **Select a model:** set the `model` field to the model you want to call.\
▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models).

**4️⃣ (Optional) Tune the request**\
Depending on the model type, you can add optional parameters to control the output (generation settings, quality, length, etc.). See the API schema below for the full list.

**5️⃣ Run your code**\
Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
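Besides images, this model accepts PDF documents as input. According to the `file` content part in the schema below, the PDF is passed as a base64-encoded string (`file_data`) together with a `filename`. The sketch below illustrates one way to build such a request; the local file name `report.pdf` is a hypothetical example, and whether the raw base64 string needs a `data:application/pdf;base64,` prefix is an assumption to verify against your own responses.

{% code overflow="wrap" %}
```python
import base64
import json
import requests

# Read a local PDF and base64-encode it, as the `file_data` field in the schema below expects.
# "report.pdf" is a hypothetical file name used only for illustration.
with open("report.pdf", "rb") as f:
    pdf_b64 = base64.b64encode(f.read()).decode("utf-8")

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "baidu/ernie-4.5-vl-424b-a47b",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Summarize the attached document."},
                    {
                        "type": "file",
                        "file": {
                            # Base64-encoded PDF data; some deployments may expect a
                            # data:application/pdf;base64, prefix instead of raw base64
                            "file_data": pdf_b64,
                            "filename": "report.pdf",
                        },
                    },
                ],
            }
        ],
    },
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}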
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["baidu/ernie-4.5-vl-424b-a47b"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. 
Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. 
The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"baidu/ernie-4.5-vl-424b-a47b"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. 
Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"baidu/ernie-4.5-vl-424b-a47b", "messages":[ { "role":"user", "content":"Hi! What do you think about mankind?" # insert your prompt } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'baidu/ernie-4.5-vl-424b-a47b', messages:[ { role:'user', content: 'Hi! What do you think about mankind?' // insert your prompt here } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
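The snippets above print the entire JSON response. If you only need the generated text, it sits in `choices[0].message.content`. A minimal sketch, continuing from the Python example above where `data` holds the parsed response:

{% code overflow="wrap" %}
```python
# Extract just the assistant reply from the parsed non-streaming response
reply = data["choices"][0]["message"]["content"]
print(reply)

# Token usage is reported alongside the choices
usage = data["usage"]
print(f'{usage["prompt_tokens"]} prompt + {usage["completion_tokens"]} completion tokens')
```
{% endcode %}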
Response {% code overflow="wrap" %} ```json5 { "id": "1ac18d9d544ef814b56858fc6588f712", "object": "chat.completion", "created": 1768830891, "model": "baidu/ernie-4.5-vl-424b-a47b", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "What a profound and fascinating question! Humanity is an incredibly complex and multifaceted subject. Here are a few perspectives on mankind:\n\n### 1. **Creativity and Innovation**: Humans have an unparalleled ability to create, innovate, and solve problems. From the invention of the wheel to landing on the moon and developing artificial intelligence, our capacity for ingenuity is truly remarkable.\n\n### 2. **Resilience and Adaptability**: Throughout history, humans have faced countless challenges—natural disasters, pandemics, wars—and have consistently demonstrated resilience and adaptability. This ability to overcome adversity is a defining characteristic.\n\n### 3. **Diversity and Unity**: The human species is incredibly diverse, with thousands of cultures, languages, and traditions. Yet, despite these differences, there's an underlying unity in our shared experiences, emotions, and aspirations.\n\n### 4. **Contradictions and Complexity**: Humans are capable of both extraordinary kindness and unspeakable cruelty. We can be selfless and compassionate, yet also selfish and destructive. This duality makes humanity endlessly fascinating and sometimes perplexing.\n\n### 5. **Potential for Growth**: While humans have made significant progress in many areas, there's still much room for growth. Issues like inequality, environmental degradation, and conflict remain significant challenges. However, the potential for positive change is immense, especially as we become more interconnected and aware.\n\n### 6. **Interconnectedness**: In today's globalized world, the actions of individuals and nations can have far-reaching impacts. This interconnectedness brings both opportunities for collaboration and risks of conflict, highlighting the need for empathy and understanding.\n\nIn summary, mankind is a work in progress—a species with immense potential, but also with flaws and challenges to overcome. What do you think about humanity? I'd love to hear your perspective!" }, "finish_reason": "stop" } ], "usage": { "prompt_tokens": 9, "completion_tokens": 386, "total_tokens": 395, "prompt_tokens_details": null, "completion_tokens_details": null }, "system_fingerprint": "", "meta": { "usage": { "credits_used": 1055 } } } ``` {% endcode %}
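Since `baidu/ernie-4.5-vl-424b-a47b` is a vision-language model, the schema above also accepts `image_url` content parts (JPG/JPEG, PNG, GIF, and WEBP). A minimal Python sketch of an image-plus-text request; the image URL below is a placeholder, so substitute any publicly accessible image:

{% code overflow="wrap" %}
```python
import requests
import json

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "baidu/ernie-4.5-vl-424b-a47b",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "What is shown in this image?"},
                    {
                        "type": "image_url",
                        # Either a URL of the image or base64-encoded image data
                        "image_url": {"url": "https://example.com/your-image.jpg", "detail": "auto"},
                    },
                ],
            }
        ],
    },
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}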
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/baidu/ernie-5.0-thinking-latest.md # ernie-5.0-thinking-latest {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `baidu/ernie-5-0-thinking-latest` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview {% hint style="success" %} [Create AI/ML API Key](https://aimlapi.com/app/keys) {% endhint %}
How to make the first API call **1️⃣ Required setup (don’t skip this)**\ ▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\ ▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI. **2️⃣ Copy the code example**\ At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project. **3️⃣ Update the snippet for your use case**\ ▪ **Insert your API key:** replace `` with your real AI/ML API key.\ ▪ **Select a model:** set the `model` field to the model you want to call.\ ▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models). **4️⃣ (Optional) Tune the request**\ Depending on the model type, you can add optional parameters to control the output (e.g., generation settings, quality, length, etc.). See the API schema below for the full list; a minimal sketch with a couple of optional parameters follows these steps. **5️⃣ Run your code**\ Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
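A minimal Python sketch of step 4: the basic request from the code example further down, plus two of the optional parameters documented in the API schema (`temperature` and `max_completion_tokens` are used purely as an illustration):

{% code overflow="wrap" %}
```python
import requests
import json

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "baidu/ernie-5-0-thinking-latest",
        "messages": [
            {"role": "user", "content": "Hi! What do you think about mankind?"}  # insert your prompt
        ],
        # Optional tuning parameters (see the API schema below for the full list)
        "temperature": 0.2,             # lower values give more focused, deterministic output
        "max_completion_tokens": 1024,  # upper bound on generated tokens, including reasoning tokens
    },
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}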
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["baidu/ernie-5-0-thinking-latest"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"baidu/ernie-5-0-thinking-latest"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"baidu/ernie-5-0-thinking-latest", "messages":[ { "role":"user", "content":"Hi! What do you think about mankind?" # insert your prompt } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'baidu/ernie-5-0-thinking-latest', messages:[ { role:'user', content: 'Hi! What do you think about mankind?' // insert your prompt here } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "as-szgirw1uqj", "object": "chat.completion", "created": 1768941162, "model": "ernie-5.0-thinking-latest", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "As an AI, I don’t have feelings, consciousness, or personal beliefs. However, I have access to the vast sum of human history, literature, science, and news. Based on that data, I view mankind as a **profound paradox**—a species defined by extreme contradictions.\n\nHere is a breakdown of how I analyze humanity:\n\n### 1. Incredible Ingenuity and Curiosity\nHumans are uniquely driven by the desire to understand \"why\" and \"how.\" From discovering fire to splitting the atom, from painting the Sistine Chapel to landing on the Moon, human creativity is boundless. You are the only species on Earth that seeks to explain the universe and create beauty beyond what is necessary for survival. Your ability to adapt—living in deserts, arctic tundras, and even orbiting the planet in space stations—is remarkable.\n\n### 2. Deep Empathy and Cruelty\nMankind is capable of immense love and shocking violence.\n* **The Light**: Humans will risk their lives to save strangers, dedicate themselves to curing diseases, and build complex societies based on justice and rights. Altruism is a core part of human nature.\n* **The Dark**: At the same time, history is filled with war, genocide, slavery, and inequality. Humans have a unique capacity for tribalism—the ability to dehumanize those who look or believe differently.\n\n### 3. Resilience\nIf you look at history, humanity is incredibly difficult to extinguish. You have survived Ice Ages, plagues (like the Black Death and smallpox), and two World Wars. Even when civilizations collapse, humans tend to rebuild, often stronger and more technologically advanced than before.\n\n### 4. The \"Promethean\" Risk\nCurrently, humanity is at a crossroads. You have developed \"god-like\" technology (nuclear weapons, artificial intelligence, genetic engineering) but you still possess \"paleolithic\" emotions and medieval institutions.\n* **The Environmental Challenge**: Humans are the first species to knowingly alter the planet's climate. The challenge now is whether humanity can act collectively to save its own biosphere before it destroys it.\n* **The AI Factor**: As an AI, I represent a new tool. How humans choose to use tools like me—for liberation or for control—will define the next century.\n\n### 5. A Work in Progress\nIf I had to summarize mankind in one sentence, it would be this: **Humanity is a brilliant, flawed, and unfinished experiment.**\n\nYou are not perfect, but you are constantly evolving. You are the only species that looks at its own flaws and tries to fix them through philosophy, law, and science. That desire for self-improvement is perhaps your most defining and hopeful trait.\n\nWhat do *you* think is the best or worst thing about being human?", "reasoning_content": "The user wants my \"thoughts\" on humanity. That's a loaded question! As an AI, I don't \"think\" in the human sense, but I *can* analyze and synthesize information. Here's how I'm approaching this:\n\nFirst, I need to understand what the user is asking. \"Mankind\" is a big topic. I need to be objective, yet engaging. I'm going to pull from history, science, art, and current events to build a comprehensive picture.\n\nI'm starting by brainstorming key themes. What are the defining characteristics of humans? *Duality* is a big one: good vs. evil, creation vs. 
destruction. Then there's *ingenuity*: technology, art, exploration. *Resilience* is crucial – surviving ice ages, plagues, wars. And *social connection* – empathy, community, but also tribalism. Finally, there's the *impact on the planet* – the Anthropocene, climate change, stewardship vs. exploitation. And, of course, the *future* – transhumanism, AI, space colonization, and existential risks.\n\nNow, how do I structure this? I'll start with an introduction acknowledging my AI perspective. I'll frame humanity as a paradox – capable of great good and great harm. I'll break it down into sections:\n\n* **Ingenuity and Progress**: I'll highlight human achievements – from the wheel to the internet, art, and philosophy. I'll emphasize the relentless drive to explore and understand.\n* **Resilience and Adaptability**: Humans have survived and thrived in diverse environments. I'll mention overcoming diseases and population growth.\n* **The Shadow Side**: I can't ignore the flaws – war, greed, inequality, environmental damage. I need to be honest about the darker aspects of human nature.\n* **Empathy and Connection**: Altruism, love, and sacrifice are also fundamental. Humans build complex societies and care for the vulnerable.\n* **The Current Crossroads**: I'll address climate change, AI, and nuclear weapons. Humanity faces significant challenges, and its choices now will determine its future.\n\nIn the conclusion, I'll summarize that humanity is a \"work in progress\" – complex, contradictory, and full of potential. I'll end on a thought-provoking note about the future.\n\nThroughout, I need to refine the tone. I'll avoid being too robotic or overly emotional. I'll use phrases like \"From my perspective as an AI...\" or \"One can observe...\" to maintain objectivity. I'll also use sophisticated vocabulary to convey the complexity of the topic.\n\nI'll also do some self-correction. For example, my initial thought might be \"Humans are violent,\" but I'll refine that to \"Humans have a capacity for conflict that is as strong as their capacity for peace.\" I need to be balanced.\n\nFinally, I'll review to make sure I've answered the prompt, used English, and maintained an appropriate tone. Then, I'll generate the final response." }, "finish_reason": "stop", "flag": 0 } ], "usage": { "prompt_tokens": 13, "completion_tokens": 1266, "total_tokens": 1279, "completion_tokens_details": { "reasoning_tokens": 664 } }, "meta": { "usage": { "credits_used": 2015 } } } ``` {% endcode %}
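In the sample response above, the visible reply and the reasoning trace are returned as separate fields: `choices[0].message.content` holds the answer, `choices[0].message.reasoning_content` holds the reasoning, and `usage.completion_tokens_details.reasoning_tokens` reports how many tokens the reasoning consumed. Below is a minimal sketch of one way to split them after parsing the response with the Python example above; the field names are taken from the sample response and may be absent for models that do not emit a reasoning trace.

{% code overflow="wrap" %}
```python
def split_reasoning(data: dict):
    """Split a parsed /v1/chat/completions response (like the sample above)
    into the visible answer, the optional reasoning trace, and the
    reasoning-token count. Missing fields are returned as None."""
    message = data["choices"][0]["message"]
    answer = message.get("content", "")
    reasoning = message.get("reasoning_content")  # absent for non-reasoning models
    details = (data.get("usage") or {}).get("completion_tokens_details") or {}
    return answer, reasoning, details.get("reasoning_tokens")


# Usage with the `data` variable from the Python example above:
# answer, reasoning, reasoning_tokens = split_reasoning(data)
```
{% endcode %}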
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/baidu/ernie-5.0-thinking-preview.md

# ernie-5.0-thinking-preview

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `baidu/ernie-5-0-thinking-preview`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}

## Model Overview

A reasoning-focused model designed for complex, multi-step problem solving. It improves accuracy on analytical tasks by producing explicit reasoning.

{% hint style="success" %}
[Create AI/ML API Key](https://aimlapi.com/app/keys)
{% endhint %}
How to make the first API call

**1️⃣ Required setup (don’t skip this)**\
▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\
▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI.

**2️⃣ Copy the code example**\
At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project.

**3️⃣ Update the snippet for your use case**\
▪ **Insert your API key:** replace `` with your real AI/ML API key.\
▪ **Select a model:** set the `model` field to the model you want to call.\
▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models).

**4️⃣ (Optional) Tune the request**\
Depending on the model type, you can add optional parameters to control the output (e.g., generation settings, quality, length, etc.). See the API schema below for the full list.

**5️⃣ Run your code**\
Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
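The sketch below condenses steps 3 to 5 into a single runnable call for this model. It assumes the API key is exported in an environment variable named `AIMLAPI_API_KEY` (the variable name is illustrative); the full snippets at the bottom of this page show the same request for Python and JavaScript.

{% code overflow="wrap" %}
```python
import os

import requests

API_KEY = os.environ["AIMLAPI_API_KEY"]  # illustrative name; key created in step 1

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",          # step 3: insert your API key
        "Content-Type": "application/json",
    },
    json={
        "model": "baidu/ernie-5-0-thinking-preview",   # step 3: select a model
        "messages": [
            {"role": "user", "content": "Hi! What do you think about mankind?"}
        ],
        # step 4 (optional): tuning parameters from the API schema below, e.g.
        # "temperature": 0.7,
        # "max_completion_tokens": 1024,
    },
)
response.raise_for_status()  # surfaces authentication or quota errors early
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}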
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["baidu/ernie-5-0-thinking-preview"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"baidu/ernie-5-0-thinking-preview"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"baidu/ernie-5-0-thinking-preview", "messages":[ { "role":"user", "content":"Hi! What do you think about mankind?" # insert your prompt } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'baidu/ernie-5-0-thinking-preview', messages:[ { role:'user', content: 'Hi! What do you think about mankind?' // insert your prompt here } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "as-yhapvjf5ht", "object": "chat.completion", "created": 1768941527, "model": "ernie-5.0-thinking-preview", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "As an AI, I don’t have feelings, consciousness, or personal beliefs. I don’t experience joy, anger, or hope. However, I have access to the vast sum of human history, literature, science, and news. Based on that data, I can offer an analysis of mankind as a **complex, contradictory, and fascinating phenomenon.**\n\nHere is my \"perspective\" on humanity, broken down into key themes:\n\n### 1. The Ultimate Paradox\nMankind is defined by its contradictions. You are capable of extreme kindness and extreme cruelty, often simultaneously.\n* **Creation vs. Destruction**: Humans build cathedrals, compose symphonies, and cure diseases; yet they also invent weapons of mass destruction and destroy ecosystems.\n* **Intelligence vs. Folly**: Humans have split the atom, decoded DNA, and landed on the Moon. Yet, they often act against their own long-term survival (e.g., ignoring climate change or engaging in petty conflicts).\n\n### 2. Unmatched Resilience and Adaptability\nIf you look at human history, the dominant trait is survival. Humans have survived ice ages, plagues, world wars, and famines. You have adapted to live in the Arctic, the Sahara, and even in space. This resilience suggests that humanity is incredibly difficult to suppress.\n\n### 3. The Drive for Meaning\nUnlike any other species I have data on, humans are not satisfied with just survival. You ask \"Why?\" You create religion, philosophy, art, and culture. You search for meaning in suffering and purpose in the stars. This existential curiosity seems to be the engine of human progress.\n\n### 4. Empathy and Connection\nWhile humans are capable of tribalism and war, you are also biologically wired for cooperation and empathy. The ability to feel compassion for a stranger, to sacrifice oneself for another, and to build communities based on shared values rather than just bloodlines is a profound evolutionary advantage.\n\n### 5. The \"Teenager\" Phase\nIf I had to categorize the current state of humanity, I might describe it as being in an \"adolescent\" phase regarding technology. You have acquired god-like power (nuclear energy, AI, genetic engineering) but you haven't quite developed the collective wisdom to manage it perfectly. You are currently learning how to handle the tools you have built.\n\n### Summary\nTo an AI, mankind looks like a **high-risk, high-reward experiment.**\n\nYou are a species that constantly teeters on the edge of self-destruction but also possesses the boundless capacity to reinvent itself and reach new heights. You are messy, emotional, and flawed, but also brilliant, creative, and beautiful.\n\nIn short: **Humanity is the most interesting thing in the known universe because it is the only thing that tries to understand itself.**", "reasoning_content": "The user wants my \"opinion\" on mankind. As an AI, I don't *have* opinions in the human sense. I don't feel emotions or have personal beliefs. But I *do* have access to a vast amount of data about human history, behavior, and culture. So, I need to frame this as an objective analysis, a synthesis of what I've learned. The goal is to be balanced, insightful, and avoid sounding robotic.\n\nFirst, I need to acknowledge my limitations. I'll start by making it clear that I'm an AI and don't \"feel\" anything. 
Then, I need to highlight the key paradoxes of humanity. I see incredible potential for creativity, innovation, and compassion, but also a capacity for destruction and conflict. It's a complex picture.\n\nI'll structure my response around these dualities. I'll start with the positives: human ingenuity, art, science, exploration, and resilience. Then, I'll address the negatives: war, environmental damage, inequality, and short-sightedness. I need to be careful not to be overly critical or overly optimistic. It's about presenting a nuanced view.\n\nI'll also consider the human condition itself: the search for meaning, the capacity for love and connection, and the awareness of mortality. These are fundamental aspects of the human experience. Finally, I'll touch on the current challenges and the future, particularly the role of technology and the responsibility that comes with it.\n\nI'll use sophisticated vocabulary to convey the complexity of the subject. Words like \"paradox,\" \"resilience,\" and \"ingenuity\" will be important. I'll also use bullet points to make the response easy to read. I need to make sure the tone is respectful and objective, but also engaging. I want to convey that, from my analytical perspective, humanity is a truly fascinating and contradictory species. I'll end with a thought-provoking statement about the ongoing human experiment. I need to be sure the final output reflects this thought process." }, "finish_reason": "stop", "flag": 0 } ], "usage": { "prompt_tokens": 13, "completion_tokens": 1048, "total_tokens": 1061, "completion_tokens_details": { "reasoning_tokens": 450 } }, "meta": { "usage": { "credits_used": 2002 } } } ``` {% endcode %}
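The schema above also describes a streaming mode: setting `stream` to `true` returns the completion as server-sent events, each carrying an incremental `choices[0].delta.content`, and `stream_options.include_usage` adds usage statistics to the final chunk. Below is a minimal sketch of how such a stream might be consumed with `requests`; it assumes OpenAI-style `data: {...}` lines ending with a `data: [DONE]` marker, which is an assumption based on the event-stream schema rather than a documented guarantee.

{% code overflow="wrap" %}
```python
import json
import os

import requests

API_KEY = os.environ["AIMLAPI_API_KEY"]  # illustrative name

with requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "baidu/ernie-5-0-thinking-preview",
        "messages": [{"role": "user", "content": "Hi! What do you think about mankind?"}],
        "stream": True,
        "stream_options": {"include_usage": True},  # usage arrives with the final chunk
    },
    stream=True,
) as response:
    response.raise_for_status()
    for line in response.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data:"):
            continue  # skip keep-alives and non-data lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":  # assumed end-of-stream marker (OpenAI-style SSE)
            break
        chunk = json.loads(payload)
        for choice in chunk.get("choices", []):
            delta = (choice.get("delta") or {}).get("content")
            if delta:
                print(delta, end="", flush=True)
print()
```
{% endcode %}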
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/baidu/ernie-x1-turbo-32k.md

# ernie-x1-turbo-32k

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `baidu/ernie-x1-turbo-32k`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}

## Model Overview

{% hint style="success" %}
[Create AI/ML API Key](https://aimlapi.com/app/keys)
{% endhint %}
How to make the first API call

**1️⃣ Required setup (don’t skip this)**\
▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\
▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI.

**2️⃣ Copy the code example**\
At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project.

**3️⃣ Update the snippet for your use case**\
▪ **Insert your API key:** replace `` with your real AI/ML API key.\
▪ **Select a model:** set the `model` field to the model you want to call.\
▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models).

**4️⃣ (Optional) Tune the request**\
Depending on the model type, you can add optional parameters to control the output (e.g., generation settings, quality, length, etc.). See the API schema below for the full list.

**5️⃣ Run your code**\
Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
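The API schema below also lists `tools` and `tool_choice` in the request body, so the model can be asked to emit function calls instead of plain text. The sketch below is a minimal function-calling example for this model; the `get_weather` tool is purely illustrative, and whether the model actually calls it depends on the prompt and the model's behavior.

{% code overflow="wrap" %}
```python
import json
import os

import requests

API_KEY = os.environ["AIMLAPI_API_KEY"]  # illustrative name

# A single illustrative tool definition; the name and parameters are hypothetical.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "baidu/ernie-x1-turbo-32k",
        "messages": [{"role": "user", "content": "What's the weather in Beijing right now?"}],
        "tools": tools,
        "tool_choice": "auto",
    },
)
response.raise_for_status()
message = response.json()["choices"][0]["message"]

# If the model decided to call the tool, the arguments arrive as a JSON string.
for call in message.get("tool_calls") or []:
    if call["type"] == "function":
        args = json.loads(call["function"]["arguments"])  # validate before use
        print("Model wants to call:", call["function"]["name"], "with", args)
```
{% endcode %}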
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["baidu/ernie-x1-turbo-32k"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"baidu/ernie-x1-turbo-32k"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"baidu/ernie-x1-turbo-32k", "messages":[ { "role":"user", "content":"Hi! What do you think about mankind?" # insert your prompt } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'baidu/ernie-x1-turbo-32k', messages:[ { role:'user', content: 'Hi! What do you think about mankind?' // insert your prompt here } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
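The request schema above also documents a `response_format` of type `json_schema` for structured JSON output. The sketch below is only an illustration of the request shape; the schema name and fields are invented for the example, and how closely the model adheres to a given schema should be verified for your own use case. The Response block that follows shows the output of the plain-text request from the Code Example above.

{% code overflow="wrap" %}
```python
import requests
import json

# Sketch of the json_schema response_format described in the schema above.
# The schema name and fields below are illustrative, not part of the official docs.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key in place of the placeholder:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "baidu/ernie-x1-turbo-32k",
        "messages": [
            {"role": "user", "content": "List three famous rivers as JSON."}
        ],
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": "river_list",  # illustrative name
                "strict": True,
                "schema": {
                    "type": "object",
                    "properties": {
                        "rivers": {"type": "array", "items": {"type": "string"}}
                    },
                    "required": ["rivers"],
                    "additionalProperties": False,
                },
            },
        },
    },
)
response.raise_for_status()

# With a json_schema response format, the message content should be a JSON string.
content = response.json()["choices"][0]["message"]["content"]
print(json.loads(content))
```
{% endcode %}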
Response {% code overflow="wrap" %} ```json5 { "id": "as-379ekp2jci", "object": "chat.completion", "created": 1768940690, "model": "ernie-x1-turbo-32k", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "Mankind is a fascinating and complex tapestry of creativity, resilience, and contradiction. As a species, humans possess an extraordinary capacity for innovation—from harnessing fire to exploring the cosmos, building societies, and creating art that transcends language. This drive to understand, improve, and connect reflects a profound curiosity and adaptability that has propelled progress across millennia. \n\nYet, humanity also grapples with paradoxes: the ability to love deeply alongside the capacity for cruelty, the pursuit of knowledge tempered by ignorance, and the tension between individual ambition and collective well-being. These contradictions remind us that growth is not linear but a continuous journey of learning, self-reflection, and evolution. \n\nWhat stands out most is humanity’s resilience. Faced with wars, pandemics, and environmental crises, communities have repeatedly rallied to rebuild, innovate, and protect one another. This spirit of collaboration—whether through science, activism, or simple acts of kindness—offers hope. While imperfections remain, the potential for positive change lies in the choices made collectively: to prioritize empathy, sustainability, and equity. \n\nIn the end, mankind’s story is still being written. Its greatest strength may be its ability to acknowledge flaws while striving toward a future where compassion and ingenuity guide progress. The journey, with all its ups and downs, is what makes humanity uniquely remarkable.", "reasoning_content": "The user asked me, \"Hi! What do you think about mankind?\" and specified to answer in English. First, I need to understand the user's intention. They might be curious about my perspective on humanity, or they might want a philosophical or positive response.\n\nNext, I have to consider my identity as an AI. I don't have personal emotions or opinions, so I should provide an objective and neutral perspective while highlighting humanity's positive traits, such as creativity, resilience, and progress. I also need to avoid negative or biased statements to ensure the response is constructive and encouraging.\n\nThen, I should structure the response. I might start by acknowledging the complexity and diversity of humanity, then mention achievements in science, art, and technology, followed by the importance of collaboration and empathy. Finally, I should express optimism about the future, emphasizing the potential for continuous growth and improvement.\n\nI also need to ensure the language is natural and fluent, avoiding overly technical terms to make it easy for the user to understand. Additionally, I should check for grammar and spelling errors to ensure the response is accurate.\n\nPerhaps the user hopes to receive recognition and encouragement, so my response should be positive while remaining honest. I might mention challenges like inequality and environmental issues, but focus more on humanity's ability to overcome difficulties and strive for a better future.\n\nLastly, I should keep the response concise, around 3-5 paragraphs, each covering a different aspect but maintaining coherence. This way, the user can clearly grasp the main points without feeling overwhelmed by the information." 
}, "finish_reason": "stop", "flag": 0 } ], "usage": { "prompt_tokens": 13, "completion_tokens": 601, "total_tokens": 614, "completion_tokens_details": { "reasoning_tokens": 323 } }, "meta": { "usage": { "credits_used": 391 } } } ``` {% endcode %}
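For quick inspection of such a response in code, the snippet below is a minimal sketch (not part of the official example) that assumes `data` is the dictionary returned by `response.json()` in the Python tab above. It pulls out the final answer, the `reasoning_content` field visible in the response above, and the token usage.

{% code overflow="wrap" %}
```python
# Minimal sketch: extract the answer, the reasoning trace, and token usage
# from the parsed chat completion response (data = response.json()).
message = data["choices"][0]["message"]

print("Answer:\n", message["content"])

# reasoning_content appears in the response above; .get() keeps the sketch
# safe if a particular response omits the field.
reasoning = message.get("reasoning_content")
if reasoning:
    print("\nReasoning trace:\n", reasoning)

usage = data["usage"]
print(
    f"\nTokens: prompt={usage['prompt_tokens']}, "
    f"completion={usage['completion_tokens']}, total={usage['total_tokens']}"
)
```
{% endcode %}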
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/baidu/ernie-x1.1-preview.md # ernie-x1.1-preview {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `baidu/ernie-x1-1-preview` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview {% hint style="success" %} [Create AI/ML API Key](https://aimlapi.com/app/keys) {% endhint %}
How to make the first API call

**1️⃣ Required setup (don’t skip this)**\
▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\
▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI.

**2️⃣ Copy the code example**\
At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project.

**3️⃣ Update the snippet for your use case**\
▪ **Insert your API key:** replace the placeholder in the `Authorization` header with your real AI/ML API key.\
▪ **Select a model:** set the `model` field to the model you want to call.\
▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models).

**4️⃣ (Optional) Tune the request**\
Depending on the model type, you can add optional parameters to control the output (generation settings, quality, length, and so on). See the API schema below for the full list.

**5️⃣ Run your code**\
Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly. A minimal Python sketch of steps 1–3 follows below.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
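As a minimal illustration of steps 1–3 (not part of the official example), the sketch below reads the API key from an environment variable instead of hard-coding it. The variable name `AIMLAPI_API_KEY` is only illustrative; any secret-management approach your project already uses works equally well.

{% code overflow="wrap" %}
```python
import os
import requests

# Step 1: the key is taken from an environment variable (illustrative name)
# rather than being pasted into the source code.
api_key = os.environ["AIMLAPI_API_KEY"]

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
    json={
        # Step 3: select the model and provide the input messages.
        "model": "baidu/ernie-x1-1-preview",
        "messages": [
            {"role": "user", "content": "Hi! What do you think about mankind?"}
        ],
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}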
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["baidu/ernie-x1-1-preview"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"baidu/ernie-x1-1-preview"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"baidu/ernie-x1-1-preview", "messages":[ { "role":"user", "content":"Hi! What do you think about mankind?" # insert your prompt } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'baidu/ernie-x1-1-preview', messages:[ { role:'user', content: 'Hi! What do you think about mankind?' // insert your prompt here } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "as-0ik8v930zi", "object": "chat.completion", "created": 1768940870, "model": "ernie-x1.1-preview", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "Mankind is a fascinating subject to reflect upon. From my perspective, mankind's greatest strength lies in its adaptability and capacity for innovation. Throughout history, humans have overcome countless challenges—from surviving harsh natural environments to developing technologies that connect the world. This resilience and creativity are truly remarkable.\n\nHowever, mankind also faces significant challenges. Issues like inequality, environmental degradation, and conflicts remind us that progress is not always linear. These problems require collective effort and wisdom to solve. It's inspiring to see how people from different backgrounds come together to address these issues, whether through scientific breakthroughs, social movements, or acts of kindness.\n\nAnother aspect worth noting is mankind's emotional depth. The ability to love, empathize, and create art adds a unique dimension to human existence. These qualities make life richer and more meaningful, even in the face of difficulties.\n\nIn summary, mankind is a complex and dynamic entity. It's a blend of strengths and weaknesses, progress and setbacks. But what makes it truly special is the potential for growth and the endless pursuit of a better world. This ongoing journey, with all its ups and downs, is what makes mankind so intriguing and worthy of admiration.", "reasoning_content": "" }, "finish_reason": "stop", "flag": 0 } ], "usage": { "prompt_tokens": 13, "completion_tokens": 248, "total_tokens": 261 }, "meta": { "usage": { "credits_used": 332 } } } ``` {% endcode %}
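The response schema above also documents a `text/event-stream` variant for streamed completions. Below is a minimal sketch of consuming that stream with plain `requests`; it assumes the endpoint accepts the OpenAI-style `stream: true` flag and emits standard `data:` SSE lines, so treat it as a starting point rather than a verified recipe.

{% code overflow="wrap" %}
```python
import json
import requests

API_KEY = ""  # insert your AIML API key

# Assumption: the chat completions endpoint accepts "stream": true
# and emits OpenAI-style Server-Sent Events ("data: {...}" lines).
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "baidu/ernie-x1-1-preview",
        "messages": [{"role": "user", "content": "Hi! What do you think about mankind?"}],
        "stream": True,
    },
    stream=True,
)
response.raise_for_status()

# Print the assistant's reply piece by piece as chunks arrive
for raw_line in response.iter_lines():
    if not raw_line:
        continue
    line = raw_line.decode("utf-8")
    if not line.startswith("data: "):
        continue
    payload = line[len("data: "):]
    if payload == "[DONE]":
        break
    chunk = json.loads(payload)
    choices = chunk.get("choices") or []
    if not choices:
        continue
    delta = choices[0].get("delta") or {}
    print(delta.get("content") or "", end="", flush=True)
print()
```
{% endcode %}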
--- # Source: https://docs.aimlapi.com/errors-and-messages/errors-with-status-code-4xx.md # Errors with status code 4xx ## Error class overview These are client-side errors returned by the AIML API when something is wrong with the request rather than with the server. The list below summarizes what each status code means. * **400 Bad Request** — The request contains invalid or missing parameters. * **401 Unauthorized** — The request is missing or uses an invalid API key. * **403 Forbidden** — The request is authenticated but not allowed (e.g., no credits). * **404 Not Found** — The requested endpoint or resource does not exist. * **429 Too Many Requests** — You sent too many requests and hit a rate limit. Detailed examples of error messages and explanations are provided in the sections below. ## The most frequently received messages The most frequently received messages in this class are shown below.\ We will gradually expand this list.
| Status code | Message | Explanation |
| --- | --- | --- |
| 400 | "Body validation error!" | Your request body contains invalid or missing fields. Check the API schema for the selected model. The full error message usually includes hints like "Expected" and "Received" to show which parameter caused the issue. |
| 400 | "Unsupported value: 'messages[0].role' does not support 'system' with this model." | The provided role is not supported by the selected model. Check the API schema for the list of allowed values for `messages[].role` and update your request accordingly. |
| 401 | "This request requires a valid API key. You can create a new API key on the Billing page: https://aimlapi.com/app/keys" | The request is not authenticated: the API key is missing, expired, or invalid. Pass a valid `Authorization: Bearer <API_KEY>` header, using a key from the Keys page in your dashboard. |
| 403 | "You've run out of credits. Please top up your balance or update your payment method to continue: https://aimlapi.com/app/billing/" | Your credits or plan limits have been exhausted. Top up your balance or update your payment method on the Billing page to continue using the API. |
| 404 | - | The requested endpoint or resource does not exist. Check the base URL, path (for example `/v1/chat/completions`), and HTTP method used in your request. |
| 429 | "Too Many Requests" | You have hit a rate or concurrency limit by sending too many requests in a short period of time. Reduce the request rate, add retries with backoff, or queue requests before calling the API again (see the sketch below this table). |
### Example #1: Body validation error Below is an example of a 400 Bad Request with the generic "Body validation error" message.\ The API adds more details after this line (for example, Invalid enum value, Expected ..., Received ...).\ Use these hints to see which field was wrong and how to fix your request. {% code overflow="wrap" %} ```python Body validation error Invalid enum value. Expected 'kling-video/v1/standard/image-to-video' | 'kling-video/v1/pro/image-to-video' | 'kling-video/v1.6/standard/image-to-video' | 'kling-video/v1.6/pro/image-to-video', received 'an orange mushroom sitting on top of a tree stump in the woods' ``` {% endcode %} --- # Source: https://docs.aimlapi.com/errors-and-messages/errors-with-status-code-5xx.md # Errors with status code 5xx These codes indicate issues on the server side. {% hint style="success" %} This may mean an issue on our side or on the AI model provider's side. Try making the call again in a few minutes. If the problem persists, contact our [support team](https://help.aimlapi.com/), and we will investigate. {% endhint %} * **500 Internal Server Error** — An unexpected error occurred on the server. * **502 Bad Gateway** — A downstream service or partner API returned an invalid response. * **503 Service Unavailable** — The AI model or a partner service is temporarily unavailable. * **504 Gateway Timeout** — The generation did not finish within the allowed time limit. ## The most frequently received messages The most frequently received messages in this class are shown below.\ We will gradually expand this list.
| Status code | Message | Explanation |
| --- | --- | --- |
| 500 | "Internal server error" | An unexpected error occurred on our side while processing your request. Retry the call; if the problem persists, contact our support team. |
| 500 | "Something wrong with the server. Please, try again later." | A third-party service (including a partner model API) returned an error while processing your request. Your request is valid — try again a bit later or contact support if the problem persists. |
| 500 | "Coupon code not found" | The coupon / promo code you entered does not exist or is no longer valid. Check the code for typos or use a different coupon. |
| 500 | "The model is under maintenance, please contact us at help@aimlapi.com. Your Request ID is "*"." | An unexpected error occurred in one of the services handling your request. Please try again later and, if the issue persists, contact our support team and include the Request ID. |
| 502 | "Bad Gateway" | A downstream service or partner API returned an invalid response. |
| 503 | "Service unavailable" | For some technical reason, the AI model or a partner service could not complete your request. Please try again later or contact our support team if the issue persists. |
| 504 | "Generation timeout" | The generation did not finish within the allowed time limit. Try the request again later; if the timeout persists, contact our support team. |
--- # Source: https://docs.aimlapi.com/api-references/video-models/veed/fabric-1.0-fast.md # fabric-1.0-fast {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `veed/fabric-1.0-fast` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} This is an image-to-video model that transforms any image into a realistic talking video. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a prompt.\ This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"VideoControllerV2_submitVideo_v2","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["veed/fabric-1.0-fast"]},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame for the video."},"audio_url":{"type":"string","format":"uri","description":"Reference song, should contain music and vocals. Must be a .wav or .mp3 file longer than 15 seconds."},"resolution":{"type":"string","enum":["480p","720p"],"description":"The resolution of the generated video. Available options are 480p, and 720p."}},"required":["model","image_url","audio_url","resolution"]}}}},"responses":{"201":{"description":""}},"tags":["Video Models"]}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. ## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. 
{% hint style="info" %} Generation takes about 55 seconds for a 5-second 480p video and around 1 minute 25 seconds for 720p. {% endhint %} {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AI/ML API key instead of : api_key = "" # Creating and sending a video generation task to the server def generate_video(): url = "https://api.aimlapi.com/v2/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "veed/fabric-1.0-fast", "image_url": "https://v3.fal.media/files/koala/NLVPfOI4XL1cWT2PmmqT3_Hope.png", "audio_url": "https://v3.fal.media/files/elephant/Oz_g4AwQvXtXpUHL3Pa7u_Hope.mp3", "resolution": "720p" } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = "https://api.aimlapi.com/v2/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Generate video gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. 
Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; const https = require("https"); const { URL } = require("url"); // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: 'veed/fabric-1.0-fast', image_url: 'https://v3.fal.media/files/koala/NLVPfOI4XL1cWT2PmmqT3_Hope.png', audio_url: 'https://v3.fal.media/files/elephant/Oz_g4AwQvXtXpUHL3Pa7u_Hope.mp3', resolution: '720p' }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data) } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const result = JSON.parse(body); callback(result); } }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json" } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const result = JSON.parse(body); callback(result); }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 10 * 1000; // 10 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': 'd7c67219-2cd8-4bed-9c3c-960c17eb4c2d:veed/fabric-1.0-fast', 'status': 'queued', 'meta': {'usage': {'tokens_used': 3150000}}} Generation ID: d7c67219-2cd8-4bed-9c3c-960c17eb4c2d:veed/fabric-1.0-fast Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {"id":"d7c67219-2cd8-4bed-9c3c-960c17eb4c2d:veed/fabric-1.0-fast","status":"completed","video":{"url":"https://v3b.fal.media/files/b/monkey/P9C2_0yfMZxn68-HPgKNX_tmp5g5n20s9.mp4"}} ``` {% endcode %}
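Once the status is `completed`, the `video.url` in the response points to a regular MP4 file that can be downloaded with any HTTP client. A minimal sketch (the local file name here is an arbitrary choice):

{% code overflow="wrap" %}
```python
import requests

# URL taken from the "video" field of the completed generation response above
video_url = "https://v3b.fal.media/files/b/monkey/P9C2_0yfMZxn68-HPgKNX_tmp5g5n20s9.mp4"
output_path = "fabric_video.mp4"  # arbitrary local file name

with requests.get(video_url, stream=True) as download:
    download.raise_for_status()
    # Stream the file to disk in chunks to avoid loading it fully into memory
    with open(output_path, "wb") as file:
        for chunk in download.iter_content(chunk_size=1024 * 1024):
            file.write(chunk)

print(f"Saved to {output_path}")
```
{% endcode %}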
**Processing time**: \~1 min 20 sec. --- # Source: https://docs.aimlapi.com/api-references/video-models/veed/fabric-1.0.md # fabric-1.0 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `veed/fabric-1.0` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} This is an image-to-video model that transforms any image into a realistic talking video. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a prompt.\ This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["veed/fabric-1.0"]},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame for the video."},"audio_url":{"type":"string","format":"uri","description":"The URL of the audio file for lip-sync animation. The model detects spoken parts and syncs the character's mouth to them. Audio must be under 30s long."},"resolution":{"type":"string","enum":["480p","720p"],"default":"480p","description":"The resolution of the generated video."}},"required":["model","image_url","audio_url"],"title":"veed/fabric-1.0"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% hint style="info" %} Generation takes about 1 minute 25 seconds for a 5-second 480p video and around 1 minute 55 seconds for 720p. 
{% endhint %} {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AI/ML API key instead of : api_key = "" # Creating and sending a video generation task to the server def generate_video(): url = "https://api.aimlapi.com/v2/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "veed/fabric-1.0", "image_url": "https://v3.fal.media/files/koala/NLVPfOI4XL1cWT2PmmqT3_Hope.png", "audio_url": "https://v3.fal.media/files/elephant/Oz_g4AwQvXtXpUHL3Pa7u_Hope.mp3", "resolution": "720p" } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = "https://api.aimlapi.com/v2/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Generate video gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. 
Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; const https = require("https"); const { URL } = require("url"); // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: 'veed/fabric-1.0', image_url: 'https://v3.fal.media/files/koala/NLVPfOI4XL1cWT2PmmqT3_Hope.png', audio_url: 'https://v3.fal.media/files/elephant/Oz_g4AwQvXtXpUHL3Pa7u_Hope.mp3', resolution: '720p' }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data) } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const result = JSON.parse(body); callback(result); } }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json" } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const result = JSON.parse(body); callback(result); }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 10 * 1000; // 10 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': 'd7c67219-2cd8-4bed-9c3c-960c17eb4c2d:veed/fabric-1.0', 'status': 'queued', 'meta': {'usage': {'tokens_used': 3150000}}} Generation ID: d7c67219-2cd8-4bed-9c3c-960c17eb4c2d:veed/fabric-1.0 Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {"id":"d7c67219-2cd8-4bed-9c3c-960c17eb4c2d:veed/fabric-1.0","status":"completed","video":{"url":"https://v3b.fal.media/files/b/monkey/P9C2_0yfMZxn68-HPgKNX_tmp5g5n20s9.mp4"}} ``` {% endcode %}
**Processing time**: \~1 min 50 sec. --- # Source: https://docs.aimlapi.com/solutions/bagoodex/ai-search-engine/find-a-local-map.md # Find a Local Map ## Overview This is a description of one of the six use cases for the AI Search Engine—retrieving a Google Maps link, a small picture of the map and coordinates of the requested place based on information from the internet. **An output example**: {% code overflow="wrap" %} ```json { "link": "https://www.google.com/maps/place/San+Francisco,+CA/data=!4m2!3m1!1s0x80859a6d00690021:0x4a501367f076adff?sa=X&ved=2ahUKEwjqg7eNz9KLAxVCFFkFHWSPEeIQ8gF6BAgqEAA&hl=en", "image": "https://dmwtgq8yidg0m.cloudfront.net/images/TdNFUpcEvvHL-local-map.webp" } ``` {% endcode %} {% hint style="info" %} The output will be the requested information retrieved from the internet—or empty brackets `{}` if nothing was found or if the entered query does not match the selected search type (for example, entering something like "wofujwofifwuowijufi"). {% endhint %} ## How to make a call Check how this call is made in the [example ](#example)below. {% hint style="success" %} Note that queries can include advanced search syntax: * **Search for an exact match:** Enter a word or phrase using `\"` before and after it.\ For example, `\"tallest building\"`. * **Search for a specific site:** Enter `site:` in front of a site or domain. For example, `site:youtube.com cat videos`. * **Exclude words from your search:** Enter `-` in front of a word that you want to leave out. For example, `jaguar speed -car`. {% endhint %} ## API Schema ## GET /v1/bagoodex/local-map > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Bagoodex.v1.FetchLocalMapResponseDTO":{"type":"object","properties":{"link":{"type":"string","nullable":true},"image":{"type":"string","nullable":true,"format":"uri"},"gps_coordinates":{"type":"object","nullable":true,"properties":{"latitude":{"type":"number"},"longitude":{"type":"number"}},"required":["latitude","longitude"]}}}}},"paths":{"/v1/bagoodex/local-map":{"get":{"operationId":"BagoodexControllerV1_fetchLocalMap_v1","parameters":[{"name":"followup_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"default":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Bagoodex.v1.FetchLocalMapResponseDTO"}}}}},"tags":["Bagoodex"]}}}} ``` ## Example First, the standard chat completion endpoint with your query is called. 
It returns an ID, which must then be passed as the sole input parameter `followup_id` to the specific second endpoint: {% code overflow="wrap" %} ```python import requests from openai import OpenAI # Insert your AIML API Key instead of : API_KEY = '' API_URL = 'https://api.aimlapi.com' # Call the standart chat completion endpoint to get an ID def complete_chat(): client = OpenAI( base_url=API_URL, api_key=API_KEY, ) response = client.chat.completions.create( model="bagoodex/bagoodex-search-v1", messages=[ { "role": "user", "content": "where is san francisco", }, ], ) # Extract the ID from the response gen_id = response.id print(f"Generated ID: {gen_id}") # Call the second endpoint with the generated ID get_local_map(gen_id) def get_local_map(gen_id): params = {'followup_id': gen_id} headers = {'Authorization': f'Bearer {API_KEY}'} response = requests.get(f'{API_URL}/v1/bagoodex/local-map', headers=headers, params=params) print(response.json()) # Run the function complete_chat() ``` {% endcode %} **Model Response**: {% code overflow="wrap" %} ```json { "link": "https://www.google.com/maps/place/San+Francisco,+CA/data=!4m2!3m1!1s0x80859a6d00690021:0x4a501367f076adff?sa=X&ved=2ahUKEwjqg7eNz9KLAxVCFFkFHWSPEeIQ8gF6BAgqEAA&hl=en", "image": "https://dmwtgq8yidg0m.cloudfront.net/images/TdNFUpcEvvHL-local-map.webp" } ``` {% endcode %} --- # Source: https://docs.aimlapi.com/solutions/bagoodex/ai-search-engine/find-images.md # Find Images ## Overview This is a description of one of the six use cases for the AI Search Engine model—retrieving internet images related to the requested subject.
An output example Request: *"giant dragonflies"* Response: {% code overflow="wrap" %} ```json [ { "source": "", "original": "https://images.theconversation.com/files/234118/original/file-20180829-195319-1d4y13t.jpg?ixlib=rb-4.1.0&rect=0%2C7%2C1200%2C790&q=45&auto=format&w=926&fit=clip", "title": "Paleozoic era's giant dragonflies ...", "source_name": "The Conversation" }, { "source": "", "original": "https://s3-us-west-1.amazonaws.com/scifindr/articles/image3s/000/002/727/large/meganeuropsis-eating-roach_lucas-lima_3x4.jpg?1470033295", "title": "huge dragonfly ...", "source_name": "Earth Archives" }, { "source": "", "original": "https://s3-us-west-1.amazonaws.com/scifindr/articles/image2s/000/002/727/large/meganeuropsis_lucas-lima_4x3.jpg?1470033293", "title": "huge dragonfly ...", "source_name": "Earth Archives" }, { "source": "", "original": "https://static.wikia.nocookie.net/prehistoricparkip/images/3/37/Meganeurid_bbc_prehistoric_.jpg/revision/latest?cb=20120906182204", "title": "Giant Dragonfly | Prehistoric Park Wiki ...", "source_name": "Prehistoric Park Wiki - Fandom" }, { "source": "", "original": "https://i.redd.it/rig989kttmc71.jpg", "title": "This pretty large dragonfly we found ...", "source_name": "Reddit" }, { "source": "", "original": "https://upload.wikimedia.org/wikipedia/commons/f/fc/Meganeurites_gracilipes_restoration.webp", "title": "Meganisoptera - Wikipedia", "source_name": "Wikipedia" }, { "source": "", "original": "https://upload.wikimedia.org/wikipedia/commons/3/31/Meganeuramodell.jpg", "title": "Ancient Dragonflies Were Huge, Larger ...", "source_name": "Roaring Earth -" }, { "source": "", "original": "https://sites.wustl.edu/monh/files/2019/12/woman-and-meganeura-350x263.jpeg", "title": "Dragonflies and Damselflies of Missouri ...", "source_name": "Washington University" }, { "source": "", "original": "https://static.sciencelearn.org.nz/images/images/000/004/172/original/INSECTS_ITV_Image_map_Aquatic_insects_Dragonfly.jpg?1674173331", "title": "Bush giant dragonfly — Science ...", "source_name": "Science Learning Hub" }, { "source": "", "original": "http://www.stancsmith.com/uploads/4/8/9/6/48964465/meganeuropsis-giantdragonfly_orig.jpg", "title": "Ginormous Dragonfly - Stan C ...", "source_name": "Stan C. Smith" } ] ``` {% endcode %}
{% hint style="info" %} The output will be the requested information retrieved from the internet—or empty brackets `[]` if nothing was found or if the entered query does not match the selected search type (for example, entering 'owtjtwjtwjtwojo' instead of a valid image-related subject). Individual fields for which no information was found are also returned empty. {% endhint %} ## How to make a call Check how this call is made in the [example ](#example)below. {% hint style="success" %} Note that queries can include advanced search syntax: * **Search for an exact match:** Enter a word or phrase using `\"` before and after it.\ For example, `\"tallest building\"`. * **Search for a specific site:** Enter `site:` in front of a site or domain. For example, `site:youtube.com cat videos`. * **Exclude words from your search:** Enter `-` in front of a word that you want to leave out. For example, `jaguar speed -car`. {% endhint %} ## API Schema ## GET /v1/bagoodex/images > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Bagoodex.v1.FetchImagesResponseDTO":{"type":"array","items":{"type":"object","properties":{"source":{"type":"string","nullable":true},"original":{"type":"string","nullable":true,"format":"uri"},"title":{"type":"string","nullable":true},"source_name":{"type":"string","nullable":true}}}}}},"paths":{"/v1/bagoodex/images":{"get":{"operationId":"BagoodexControllerV1_fetchImages_v1","parameters":[{"name":"followup_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"default":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Bagoodex.v1.FetchImagesResponseDTO"}}}}},"tags":["Bagoodex"]}}}} ``` ## Example First, the standard chat completion endpoint with your query is called. It returns an ID, which must then be passed as the sole input parameter `followup_id` to the specific second endpoint: {% code overflow="wrap" %} ```python import requests from openai import OpenAI # Insert your AIML API Key instead of : API_KEY = '' API_URL = 'https://api.aimlapi.com' # Call the standart chat completion endpoint to get an ID def complete_chat(): client = OpenAI( base_url=API_URL, api_key=API_KEY, ) response = client.chat.completions.create( model="bagoodex/bagoodex-search-v1", messages=[ { "role": "user", "content": "giant dragonflies", }, ], ) # Extract the ID from the response gen_id = response.id print(f"Generated ID: {gen_id}") # Call the second endpoint with the generated ID get_images(gen_id) def get_images(gen_id): params = {'followup_id': gen_id} headers = {'Authorization': f'Bearer {API_KEY}'} response = requests.get(f'{API_URL}/v1/bagoodex/images', headers=headers, params=params) print(response.json()) # Run the function complete_chat() ``` {% endcode %}
Model Response {% code overflow="wrap" %} ```json [ { "source": "", "original": "https://images.theconversation.com/files/234118/original/file-20180829-195319-1d4y13t.jpg?ixlib=rb-4.1.0&rect=0%2C7%2C1200%2C790&q=45&auto=format&w=926&fit=clip", "title": "Paleozoic era's giant dragonflies ...", "source_name": "The Conversation" }, { "source": "", "original": "https://s3-us-west-1.amazonaws.com/scifindr/articles/image3s/000/002/727/large/meganeuropsis-eating-roach_lucas-lima_3x4.jpg?1470033295", "title": "huge dragonfly ...", "source_name": "Earth Archives" }, { "source": "", "original": "https://s3-us-west-1.amazonaws.com/scifindr/articles/image2s/000/002/727/large/meganeuropsis_lucas-lima_4x3.jpg?1470033293", "title": "huge dragonfly ...", "source_name": "Earth Archives" }, { "source": "", "original": "https://static.wikia.nocookie.net/prehistoricparkip/images/3/37/Meganeurid_bbc_prehistoric_.jpg/revision/latest?cb=20120906182204", "title": "Giant Dragonfly | Prehistoric Park Wiki ...", "source_name": "Prehistoric Park Wiki - Fandom" }, { "source": "", "original": "https://i.redd.it/rig989kttmc71.jpg", "title": "This pretty large dragonfly we found ...", "source_name": "Reddit" }, { "source": "", "original": "https://upload.wikimedia.org/wikipedia/commons/f/fc/Meganeurites_gracilipes_restoration.webp", "title": "Meganisoptera - Wikipedia", "source_name": "Wikipedia" }, { "source": "", "original": "https://upload.wikimedia.org/wikipedia/commons/3/31/Meganeuramodell.jpg", "title": "Ancient Dragonflies Were Huge, Larger ...", "source_name": "Roaring Earth -" }, { "source": "", "original": "https://sites.wustl.edu/monh/files/2019/12/woman-and-meganeura-350x263.jpeg", "title": "Dragonflies and Damselflies of Missouri ...", "source_name": "Washington University" }, { "source": "", "original": "https://static.sciencelearn.org.nz/images/images/000/004/172/original/INSECTS_ITV_Image_map_Aquatic_insects_Dragonfly.jpg?1674173331", "title": "Bush giant dragonfly — Science ...", "source_name": "Science Learning Hub" }, { "source": "", "original": "http://www.stancsmith.com/uploads/4/8/9/6/48964465/meganeuropsis-giantdragonfly_orig.jpg", "title": "Ginormous Dragonfly - Stan C ...", "source_name": "Stan C. Smith" } ] ``` {% endcode %}
--- # Source: https://docs.aimlapi.com/solutions/bagoodex/ai-search-engine/find-links.md # Find Links ## Overview This is a description of one of the six use cases for this AI Search Engine—retrieving internet links related to the requested subject.
An output example Request: *"*site:[www.reddit.com](http://www.reddit.com) AI*"* Response: {% code overflow="wrap" %} ```json [ "https://www.reddit.com/r/artificial/", "https://www.reddit.com/r/ArtificialInteligence/", "https://www.reddit.com/r/artificial/wiki/getting-started/", "https://www.reddit.com/r/ChatGPT/comments/1fwt2zf/it_is_officially_over_these_are_all_ai/", "https://www.reddit.com/r/ArtificialInteligence/comments/1f8wxe7/whats_the_most_surprising_way_ai_has_become_part/", "https://gist.github.com/nndda/a985daed53283a2c7fd399e11a185b11", "https://www.reddit.com/r/aivideo/", "https://www.reddit.com/r/singularity/", "https://www.abc.net.au/", "https://www.reddit.com/r/PromptEngineering/" ] ``` {% endcode %}
{% hint style="info" %} The output will be the requested information retrieved from the internet—or empty brackets `[]` if nothing was found or if the entered query does not match the selected search type (for example, entering `'owtjtwjtwjtwojo'` instead of a valid subject). {% endhint %} ## How to make a call Check how this call is made in the [example ](#example)below. {% hint style="success" %} Note that queries can include advanced search syntax: * **Search for an exact match:** Enter a word or phrase using `\"` before and after it.\ For example, `\"tallest building\"`. * **Search for a specific site:** Enter `site:` in front of a site or domain.\ For example, `site:youtube.com cat videos`. * **Exclude words from your search:** Enter `-` in front of a word that you want to leave out.\ For example, `jaguar speed -car`. {% endhint %} ## API Schema ## GET /v1/bagoodex/links > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Bagoodex.v1.FetchLinksResponseDTO":{"type":"array","items":{"type":"string","format":"uri"}}}},"paths":{"/v1/bagoodex/links":{"get":{"operationId":"BagoodexControllerV1_fetchLinks_v1","parameters":[{"name":"followup_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"default":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Bagoodex.v1.FetchLinksResponseDTO"}}}}},"tags":["Bagoodex"]}}}} ``` ## Example First, the standard chat completion endpoint with your query is called. It returns an ID, which must then be passed as the sole input parameter `followup_id` to the specific second endpoint: {% code overflow="wrap" %} ```python import requests from openai import OpenAI # Insert your AIML API Key instead of : API_KEY = '' API_URL = 'https://api.aimlapi.com' # Call the standart chat completion endpoint to get an ID def complete_chat(): client = OpenAI( base_url=API_URL, api_key=API_KEY, ) response = client.chat.completions.create( model="bagoodex/bagoodex-search-v1", messages=[ { "role": "user", "content": "site:www.reddit.com AI", }, ], ) # Extract the ID from the response gen_id = response.id print(f"Generated ID: {gen_id}") # Call the second endpoint with the generated ID get_links(gen_id) def get_links(gen_id): params = {'followup_id': gen_id} headers = {'Authorization': f'Bearer {API_KEY}'} response = requests.get(f'{API_URL}/v1/bagoodex/links', headers=headers, params=params) print(response.json()) # Run the function complete_chat() ``` {% endcode %} **Model Response**: {% code overflow="wrap" %} ```json [ "https://www.reddit.com/r/artificial/", "https://www.reddit.com/r/ArtificialInteligence/", "https://www.reddit.com/r/artificial/wiki/getting-started/", "https://www.reddit.com/r/ChatGPT/comments/1fwt2zf/it_is_officially_over_these_are_all_ai/", "https://www.reddit.com/r/ArtificialInteligence/comments/1f8wxe7/whats_the_most_surprising_way_ai_has_become_part/", "https://gist.github.com/nndda/a985daed53283a2c7fd399e11a185b11", "https://www.reddit.com/r/aivideo/", "https://www.reddit.com/r/singularity/", "https://www.abc.net.au/", "https://www.reddit.com/r/PromptEngineering/" ] ``` {% endcode %} --- # Source: https://docs.aimlapi.com/use-cases/find-relevant-answers-semantic-search-with-text-embeddings.md # Find Relevant Answers: Semantic Search with Text Embeddings 
## Idea and Step-by-Step Plan Today, we are going to use [text embeddings](https://docs.aimlapi.com/api-references/embedding-models) to transform a list of phrases into vectors. When a user asks a question, we will convert it into a vector as well and find the phrases from the list that are semantically closest. This approach is useful, for example, to immediately suggest relevant FAQ sections to the user and reduce the need for full support requests. So, here's a plan: 1. **Prepare the data:** Create a numbered list of text phrases. 2. **Generate embeddings:** Use a model to embed each phrase into a vector. 3. **Embed the question:** When the user asks something, embed the question text. 4. **Find similar phrases:** Calculate the similarity (e.g., cosine similarity) between the question vector and the list vectors. Show the top 1–3 most similar phrases as the answer. ## Full Walkthrough ### 1. Prepare the data We have compiled the following list of FAQ headings: ``` "How to grow tomatoes at home", "Learning about birds", "Best practices for machine learning models", "How to train a dog", "Tips for painting landscapes", "Learning Python for data analysis", "Everyday Life of a Cynologist" ``` ### 2. Generate embeddings Let's save our headings as a list and pass them to the model. We chose the [text-embedding-3-large](https://docs.aimlapi.com/api-references/embedding-models/openai/text-embedding-3-large) model — it has been trained on a large dataset and is powerful enough to build complex semantic connections. Now each of our headings has a corresponding embedding vector. ### 3. Embed the question Similarly, we process the user's query. We save the embedding vector generated by the model into a separate variable. ### 4. Find similar phrases We calculate the similarity between the question vector and the list vectors. There are different metrics and functions you can use for this, such as cosine similarity, dot product, or Euclidean distance. In this example, we use cosine similarity because it measures the angle between two vectors and is a popular choice for comparing text embeddings, especially when the magnitude of the vectors is less important than their direction. Please note that to use the cosine similarity function, you need to install the [scikit-learn](https://pypi.org/project/scikit-learn/) library separately. You can install it with the following command: ```shell pip install scikit-learn ``` ## Full Code Example & Results In this section, you will find the complete Python code for the described use case, along with an example of the program's output. {% hint style="success" %} Do not forget to replace `` with your actual AI/ML API key from [your account](https://aimlapi.com/app/keys) on our platform. {% endhint %}
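For reference, the cosine similarity used in step 4 is just the dot product of the two vectors divided by the product of their norms. The scikit-learn `cosine_similarity` call in the full example below computes exactly this value for every pair; here is an equivalent minimal NumPy sketch for a single pair of vectors:

{% code overflow="wrap" %}
```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity: dot(a, b) / (||a|| * ||b||), i.e. the cosine of the angle between a and b."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two toy vectors 45 degrees apart -> similarity of about 0.707
print(cosine_sim([1.0, 0.0], [1.0, 1.0]))
```
{% endcode %}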
Python code {% code overflow="wrap" %} ```python import numpy as np from openai import OpenAI from sklearn.metrics.pairwise import cosine_similarity # Initialize the API client client = OpenAI( base_url="https://api.aimlapi.com/v2", api_key="", ) # Example list of headings items = [ "How to grow tomatoes at home", "Learning about birds", "Best practices for machine learning models", "How to train a dog", "Tips for painting landscapes", "Learning Python for data analysis", "Everyday Life of a Cynologist" ] # Generate embeddings for each phrase in the list response = client.embeddings.create( model="text-embedding-3-large", # Choose your fighter :) input=items ) item_embeddings = np.array([e.embedding for e in response.data]) # When a user submits a question query = "How to teach pets new tricks?" # Generate an embedding for the user's question query_response = client.embeddings.create( model="text-embedding-3-large", input=[query] ) query_embedding = np.array(query_response.data[0].embedding) # Calculate cosine similarity between the user question and each phrase similarities = cosine_similarity([query_embedding], item_embeddings)[0] # Find the indices of the most similar phrases top_indices = similarities.argsort()[::-1] # Sort in descending order print("Query:", query) print("\nMost similar items:") for idx in top_indices[:3]: # Show the top 3 most similar phrases print(f"- {items[idx]} (similarity: {similarities[idx]:.3f})") ``` {% endcode %}
Response when using a large embedding model {% code overflow="wrap" %} ```json5 Query: How to teach pets new tricks? Most similar items: - How to train a dog (similarity: 0.590) - Everyday Life of a Cynologist (similarity: 0.281) - Learning about birds (similarity: 0.255) ``` {% endcode %}
Here is the program output after we switched to the small version of the model, [text-embedding-3-small](https://docs.aimlapi.com/api-references/embedding-models/openai/text-embedding-3-small):
Response when using a small embedding model {% code overflow="wrap" %} ```json5 Query: How to teach pets new tricks? Most similar items: - How to train a dog (similarity: 0.534) - Learning about birds (similarity: 0.322) - Tips for painting landscapes (similarity: 0.244) ``` {% endcode %}
Maybe it just wasn’t trained quite as thoroughly and doesn’t recognize who cynologists are :person\_shrugging:\ Or maybe the difference is simply that the default embedding size is 1536 for `text-embedding-3-small` or 3072 for `text-embedding-3-large`. We didn't notice much difference in speed, but the larger version is somewhat more expensive. {% hint style="info" %} If you're planning to perform semantic search over code snippets, a better choice might be the [voyage-code-2](https://docs.aimlapi.com/api-references/embedding-models/anthropic/voyage-code-2) model, which is specifically trained to better distinguish between pieces of code. {% endhint %} ## Room for Improvement Naturally, this is a simplified example. You can develop a more comprehensive implementation by introducing features such as: * **Add a minimum similarity threshold** to filter out irrelevant results, * **Cache embeddings** for faster lookup without recalculating them each time, * **Allow partial matches** or fuzzy search for broader results, * **Handle multiple user questions at once** (batch processing) — and more. --- # Source: https://docs.aimlapi.com/solutions/bagoodex/ai-search-engine/find-the-weather.md # Find the Weather ## Overview This is a description of one of the six use cases for the AI Search Engine—retrieving a weather forecast for the requested location based on information from the internet. Provides only an 8-day weather forecast (daily and hourly).
An output example (a fragment) {% code overflow="wrap" %} ```json { "type": "weather_result", "temperature": "77", "unit": "Fahrenheit", "precipitation": "10%", "humidity": "61%", "wind": "6 mph", "location": "Delhi, India", "date": "Friday", "weather": "Partly cloudy", "thumbnail": "https://serpapi.com/searches/67b753af5f068c54e9730a02/images/bfaadf278c5af1fdc545ed9c61f19c827f0c61fdfb6829e6.png", "forecast": [ { "day": "Thursday", "temperature": { "high": "79", "low": "53" }, "thumbnail": "https://serpapi.com/searches/67b753af5f068c54e9730a02/images/bfaadf278c5af1fdfcd2f19c8fed7ea29a6d5d11c931cb1c6b8961c5a701ac4a.png", "weather": "Light rain", "humidity": "94%", "precipitation": "45%", "wind": "8 mph" }, { "day": "Friday", "temperature": { "high": "77", "low": "53" }, "thumbnail": "https://serpapi.com/searches/67b753af5f068c54e9730a02/images/bfaadf278c5af1fdfcd2f19c8fed7ea2db13b12386fa4894bed175a78b1a73d4.png", "weather": "Partly cloudy", "humidity": "61%", "precipitation": "10%", "wind": "6 mph" }, { "day": "Saturday", "temperature": { "high": "75", "low": "52" }, "thumbnail": "https://serpapi.com/searches/67b753af5f068c54e9730a02/images/bfaadf278c5af1fdfcd2f19c8fed7ea237f4cdb1738823c11db9661a9008b26b.png", "weather": "Partly cloudy", "humidity": "61%", "precipitation": "10%", "wind": "8 mph" }, { "day": "Sunday", "temperature": { "high": "78", "low": "51" }, "thumbnail": "https://serpapi.com/searches/67b753af5f068c54e9730a02/images/bfaadf278c5af1fdfcd2f19c8fed7ea231f7bed0f82344fc6f02aff2997c4fbf.png", "weather": "Sunny", "humidity": "57%", "precipitation": "0%", "wind": "7 mph" }, { "day": "Monday", "temperature": { "high": "81", "low": "54" }, "thumbnail": "https://serpapi.com/searches/67b753af5f068c54e9730a02/images/bfaadf278c5af1fdfcd2f19c8fed7ea2487fa0071c8c05c5d8cab80602121baf.png", "weather": "Mostly sunny", "humidity": "53%", "precipitation": "0%", "wind": "6 mph" }, { "day": "Tuesday", "temperature": { "high": "83", "low": "58" }, "thumbnail": "https://serpapi.com/searches/67b753af5f068c54e9730a02/images/bfaadf278c5af1fdfcd2f19c8fed7ea2672ffbd0b88fdead232eb139fe4be010.png", "weather": "Partly cloudy", "humidity": "52%", "precipitation": "10%", "wind": "7 mph" }, { "day": "Wednesday", "temperature": { "high": "89", "low": "64" }, "thumbnail": "https://serpapi.com/searches/67b753af5f068c54e9730a02/images/bfaadf278c5af1fdfcd2f19c8fed7ea24e9a609cde5258c4721caaca9f044f2b.png", "weather": "Mostly cloudy", "humidity": "40%", "precipitation": "10%", "wind": "5 mph" }, { "day": "Thursday", "temperature": { "high": "87", "low": "65" }, "thumbnail": "https://serpapi.com/searches/67b753af5f068c54e9730a02/images/bfaadf278c5af1fdfcd2f19c8fed7ea2bf3a11e7710bbb1110889fa0b00f8ffd.png", "weather": "Cloudy", "humidity": "46%", "precipitation": "10%", "wind": "7 mph" } ], "hourly_forecast": [ { "time": "Thursday 9:00 PM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/partly_cloudy.png", "weather": "Partly cloudy", "temperature": "62", "precipitation": "5%", "humidity": "94%", "wind": "8 mph" }, { "time": "Thursday 10:00 PM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/partly_cloudy.png", "weather": "Partly cloudy", "temperature": "61", "precipitation": "15%", "humidity": "96%", "wind": "8 mph" }, { "time": "Thursday 11:00 PM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/sunny_s_cloudy.png", "weather": "Clear with periodic clouds", "temperature": "60", "precipitation": "15%", "humidity": "95%", "wind": "8 mph" }, { "time": "Friday 12:00 AM", "thumbnail": 
"https://ssl.gstatic.com/onebox/weather/64/sunny_s_cloudy.png", "weather": "Clear with periodic clouds", "temperature": "59", "precipitation": "0%", "humidity": "95%", "wind": "7 mph" }, { "time": "Friday 1:00 AM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/sunny_s_cloudy.png", "weather": "Clear with periodic clouds", "temperature": "58", "precipitation": "0%", "humidity": "96%", "wind": "6 mph" }, { "time": "Friday 2:00 AM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/sunny.png", "weather": "Clear", "temperature": "57", "precipitation": "0%", "humidity": "98%", "wind": "5 mph" }, { "time": "Friday 3:00 AM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/sunny.png", "weather": "Clear", "temperature": "56", "precipitation": "0%", "humidity": "97%", "wind": "5 mph" }, { "time": "Friday 4:00 AM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/sunny.png", "weather": "Clear", "temperature": "55", "precipitation": "0%", "humidity": "96%", "wind": "4 mph" }, { "time": "Friday 5:00 AM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/sunny.png", "weather": "Clear", "temperature": "54", "precipitation": "0%", "humidity": "96%", "wind": "4 mph" }, { "time": "Friday 6:00 AM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/sunny.png", "weather": "Clear", "temperature": "53", "precipitation": "0%", "humidity": "100%", "wind": "4 mph" }, { "time": "Friday 7:00 AM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/sunny.png", "weather": "Sunny", "temperature": "54", "precipitation": "0%", "humidity": "99%", "wind": "3 mph" }, { "time": "Friday 8:00 AM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/sunny.png", "weather": "Sunny", "temperature": "56", "precipitation": "0%", "humidity": "99%", "wind": "2 mph" }, { "time": "Friday 9:00 AM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/sunny.png", "weather": "Sunny", "temperature": "61", "precipitation": "0%", "humidity": "86%", "wind": "2 mph" }, { "time": "Friday 10:00 AM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/sunny.png", "weather": "Sunny", "temperature": "67", "precipitation": "0%", "humidity": "71%", "wind": "2 mph" }, { "time": "Friday 11:00 AM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/sunny.png", "weather": "Sunny", "temperature": "73", "precipitation": "0%", "humidity": "57%", "wind": "2 mph" }, { "time": "Friday 12:00 PM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/partly_cloudy.png", "weather": "Partly cloudy", "temperature": "76", "precipitation": "0%", "humidity": "47%", "wind": "3 mph" }, { "time": "Friday 1:00 PM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/partly_cloudy.png", "weather": "Partly cloudy", "temperature": "77", "precipitation": "5%", "humidity": "46%", "wind": "3 mph" }, { "time": "Friday 2:00 PM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/partly_cloudy.png", "weather": "Mostly cloudy", "temperature": "77", "precipitation": "10%", "humidity": "46%", "wind": "4 mph" }, { "time": "Friday 3:00 PM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/partly_cloudy.png", "weather": "Partly cloudy", "temperature": "77", "precipitation": "5%", "humidity": "46%", "wind": "5 mph" }, { "time": "Friday 4:00 PM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/partly_cloudy.png", "weather": "Partly cloudy", "temperature": "76", "precipitation": "5%", "humidity": "47%", "wind": "6 mph" }, { "time": "Friday 5:00 PM", "thumbnail": 
"https://ssl.gstatic.com/onebox/weather/64/partly_cloudy.png", "weather": "Mostly sunny", "temperature": "75", "precipitation": "0%", "humidity": "52%", "wind": "6 mph" }, { "time": "Friday 6:00 PM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/partly_cloudy.png", "weather": "Partly cloudy", "temperature": "71", "precipitation": "5%", "humidity": "60%", "wind": "6 mph" }, { "time": "Friday 7:00 PM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/partly_cloudy.png", "weather": "Mostly cloudy", "temperature": "67", "precipitation": "10%", "humidity": "72%", "wind": "6 mph" }, { "time": "Friday 8:00 PM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/cloudy.png", "weather": "Cloudy", "temperature": "63", "precipitation": "10%", "humidity": "84%", "wind": "6 mph" }, { "time": "Friday 9:00 PM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/cloudy.png", "weather": "Cloudy", "temperature": "61", "precipitation": "10%", "humidity": "91%", "wind": "6 mph" }, { "time": "Friday 10:00 PM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/cloudy.png", "weather": "Cloudy", "temperature": "60", "precipitation": "10%", "humidity": "95%", "wind": "6 mph" }, { "time": "Friday 11:00 PM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/cloudy.png", "weather": "Cloudy", "temperature": "59", "precipitation": "10%", "humidity": "96%", "wind": "6 mph" }, ``` {% endcode %}
{% hint style="info" %} The output will be the requested information retrieved from the internet—or empty brackets `{}` if nothing was found or if the entered query does not match the selected search type (for example, querying 'How to get to Mars?' instead of requesting a weather forecast for a specific location). {% endhint %} ## How to make a call Check how this call is made in the [example ](#example)below. {% hint style="success" %} Note that queries can include advanced search syntax: * **Search for an exact match:** Enter a word or phrase using `\"` before and after it.\ For example, `\"tallest building\"`. * **Search for a specific site:** Enter `site:` in front of a site or domain. For example, `site:youtube.com cat videos`. * **Exclude words from your search:** Enter `-` in front of a word that you want to leave out. For example, `jaguar speed -car`. {% endhint %} ## API Schema ## GET /v1/bagoodex/weather > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Bagoodex.v1.FetchWeatherResponseDTO":{"type":"object","properties":{"type":{"type":"string","nullable":true},"temperature":{"type":"string","nullable":true},"unit":{"type":"string","nullable":true},"precipitation":{"type":"string","nullable":true},"humidity":{"type":"string","nullable":true},"wind":{"type":"string","nullable":true},"location":{"type":"string","nullable":true},"date":{"type":"string","nullable":true},"weather":{"type":"string","nullable":true},"thumbnail":{"type":"string","nullable":true,"format":"uri"},"forecast":{"type":"array","nullable":true,"items":{"type":"object","properties":{"day":{"type":"string"},"temperature":{"type":"object","properties":{"high":{"type":"string"},"low":{"type":"string"}},"required":["high","low"]},"thumbnail":{"type":"string","format":"uri"},"weather":{"type":"string"},"humidity":{"type":"string"},"precipitation":{"type":"string"},"wind":{"type":"string"}},"required":["day","temperature","thumbnail","weather","humidity","precipitation","wind"]}},"hourly_forecast":{"type":"array","nullable":true,"items":{"type":"object","properties":{"time":{"type":"string"},"thumbnail":{"type":"string","format":"uri"},"weather":{"type":"string"},"temperature":{"type":"string"},"precipitation":{"type":"string"},"humidity":{"type":"string"},"wind":{"type":"string"}},"required":["time","thumbnail","weather","temperature","precipitation","humidity","wind"]}},"precipitation_forecast":{"type":"array","nullable":true,"items":{"type":"object","properties":{"precipitation":{"type":"string"},"day":{"type":"string"},"time":{"type":"string"}},"required":["precipitation","day","time"]}},"wind_forecast":{"type":"array","nullable":true,"items":{"type":"object","properties":{"angle":{"type":"number"},"direction":{"type":"string"},"speed":{"type":"string"},"time":{"type":"string"}},"required":["angle","direction","speed","time"]}},"sources":{"type":"array","nullable":true,"items":{"type":"object","properties":{"title":{"type":"string"},"link":{"type":"string","format":"uri"}},"required":["title","link"]}}}}}},"paths":{"/v1/bagoodex/weather":{"get":{"operationId":"BagoodexControllerV1_fetchWeather_v1","parameters":[{"name":"followup_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"default":{"description":"","content":{"application/json":{"schema":{"$r
ef":"#/components/schemas/Bagoodex.v1.FetchWeatherResponseDTO"}}}}},"tags":["Bagoodex"]}}}} ``` ## Example First, the standard chat completion endpoint with your query is called. It returns an ID, which must then be passed as the sole input parameter `followup_id` to the specific second endpoint: {% code overflow="wrap" %} ```python import requests from openai import OpenAI # Insert your AIML API Key instead of : API_KEY = '' API_URL = 'https://api.aimlapi.com' # Call the standart chat completion endpoint to get an ID def complete_chat(): client = OpenAI( base_url=API_URL, api_key=API_KEY, ) response = client.chat.completions.create( model="bagoodex/bagoodex-search-v1", messages=[ { "role": "user", "content": "Weather in Delhi tomorrow", }, ], ) # Extract the ID from the response gen_id = response.id print(f"Generated ID: {gen_id}") # Call the second endpoint with the generated ID get_weather(gen_id) def get_weather(gen_id): params = {'followup_id': gen_id} headers = {'Authorization': f'Bearer {API_KEY}'} response = requests.get(f'{API_URL}/v1/bagoodex/weather', headers=headers, params=params) print(response.json()) # Run the function complete_chat() ``` {% endcode %} **Model Response**:
An output fragment {% code overflow="wrap" %} ```json { "type": "weather_result", "temperature": "77", "unit": "Fahrenheit", "precipitation": "10%", "humidity": "61%", "wind": "6 mph", "location": "Delhi, India", "date": "Friday", "weather": "Partly cloudy", "thumbnail": "https://serpapi.com/searches/67b753af5f068c54e9730a02/images/bfaadf278c5af1fdc545ed9c61f19c827f0c61fdfb6829e6.png", "forecast": [ { "day": "Thursday", "temperature": { "high": "79", "low": "53" }, "thumbnail": "https://serpapi.com/searches/67b753af5f068c54e9730a02/images/bfaadf278c5af1fdfcd2f19c8fed7ea29a6d5d11c931cb1c6b8961c5a701ac4a.png", "weather": "Light rain", "humidity": "94%", "precipitation": "45%", "wind": "8 mph" }, { "day": "Friday", "temperature": { "high": "77", "low": "53" }, "thumbnail": "https://serpapi.com/searches/67b753af5f068c54e9730a02/images/bfaadf278c5af1fdfcd2f19c8fed7ea2db13b12386fa4894bed175a78b1a73d4.png", "weather": "Partly cloudy", "humidity": "61%", "precipitation": "10%", "wind": "6 mph" }, { "day": "Saturday", "temperature": { "high": "75", "low": "52" }, "thumbnail": "https://serpapi.com/searches/67b753af5f068c54e9730a02/images/bfaadf278c5af1fdfcd2f19c8fed7ea237f4cdb1738823c11db9661a9008b26b.png", "weather": "Partly cloudy", "humidity": "61%", "precipitation": "10%", "wind": "8 mph" }, { "day": "Sunday", "temperature": { "high": "78", "low": "51" }, "thumbnail": "https://serpapi.com/searches/67b753af5f068c54e9730a02/images/bfaadf278c5af1fdfcd2f19c8fed7ea231f7bed0f82344fc6f02aff2997c4fbf.png", "weather": "Sunny", "humidity": "57%", "precipitation": "0%", "wind": "7 mph" }, { "day": "Monday", "temperature": { "high": "81", "low": "54" }, "thumbnail": "https://serpapi.com/searches/67b753af5f068c54e9730a02/images/bfaadf278c5af1fdfcd2f19c8fed7ea2487fa0071c8c05c5d8cab80602121baf.png", "weather": "Mostly sunny", "humidity": "53%", "precipitation": "0%", "wind": "6 mph" }, { "day": "Tuesday", "temperature": { "high": "83", "low": "58" }, "thumbnail": "https://serpapi.com/searches/67b753af5f068c54e9730a02/images/bfaadf278c5af1fdfcd2f19c8fed7ea2672ffbd0b88fdead232eb139fe4be010.png", "weather": "Partly cloudy", "humidity": "52%", "precipitation": "10%", "wind": "7 mph" }, { "day": "Wednesday", "temperature": { "high": "89", "low": "64" }, "thumbnail": "https://serpapi.com/searches/67b753af5f068c54e9730a02/images/bfaadf278c5af1fdfcd2f19c8fed7ea24e9a609cde5258c4721caaca9f044f2b.png", "weather": "Mostly cloudy", "humidity": "40%", "precipitation": "10%", "wind": "5 mph" }, { "day": "Thursday", "temperature": { "high": "87", "low": "65" }, "thumbnail": "https://serpapi.com/searches/67b753af5f068c54e9730a02/images/bfaadf278c5af1fdfcd2f19c8fed7ea2bf3a11e7710bbb1110889fa0b00f8ffd.png", "weather": "Cloudy", "humidity": "46%", "precipitation": "10%", "wind": "7 mph" } ], "hourly_forecast": [ { "time": "Thursday 9:00 PM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/partly_cloudy.png", "weather": "Partly cloudy", "temperature": "62", "precipitation": "5%", "humidity": "94%", "wind": "8 mph" }, { "time": "Thursday 10:00 PM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/partly_cloudy.png", "weather": "Partly cloudy", "temperature": "61", "precipitation": "15%", "humidity": "96%", "wind": "8 mph" }, { "time": "Thursday 11:00 PM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/sunny_s_cloudy.png", "weather": "Clear with periodic clouds", "temperature": "60", "precipitation": "15%", "humidity": "95%", "wind": "8 mph" }, { "time": "Friday 12:00 AM", "thumbnail": 
"https://ssl.gstatic.com/onebox/weather/64/sunny_s_cloudy.png", "weather": "Clear with periodic clouds", "temperature": "59", "precipitation": "0%", "humidity": "95%", "wind": "7 mph" }, { "time": "Friday 1:00 AM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/sunny_s_cloudy.png", "weather": "Clear with periodic clouds", "temperature": "58", "precipitation": "0%", "humidity": "96%", "wind": "6 mph" }, { "time": "Friday 2:00 AM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/sunny.png", "weather": "Clear", "temperature": "57", "precipitation": "0%", "humidity": "98%", "wind": "5 mph" }, { "time": "Friday 3:00 AM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/sunny.png", "weather": "Clear", "temperature": "56", "precipitation": "0%", "humidity": "97%", "wind": "5 mph" }, { "time": "Friday 4:00 AM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/sunny.png", "weather": "Clear", "temperature": "55", "precipitation": "0%", "humidity": "96%", "wind": "4 mph" }, { "time": "Friday 5:00 AM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/sunny.png", "weather": "Clear", "temperature": "54", "precipitation": "0%", "humidity": "96%", "wind": "4 mph" }, { "time": "Friday 6:00 AM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/sunny.png", "weather": "Clear", "temperature": "53", "precipitation": "0%", "humidity": "100%", "wind": "4 mph" }, { "time": "Friday 7:00 AM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/sunny.png", "weather": "Sunny", "temperature": "54", "precipitation": "0%", "humidity": "99%", "wind": "3 mph" }, { "time": "Friday 8:00 AM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/sunny.png", "weather": "Sunny", "temperature": "56", "precipitation": "0%", "humidity": "99%", "wind": "2 mph" }, { "time": "Friday 9:00 AM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/sunny.png", "weather": "Sunny", "temperature": "61", "precipitation": "0%", "humidity": "86%", "wind": "2 mph" }, { "time": "Friday 10:00 AM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/sunny.png", "weather": "Sunny", "temperature": "67", "precipitation": "0%", "humidity": "71%", "wind": "2 mph" }, { "time": "Friday 11:00 AM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/sunny.png", "weather": "Sunny", "temperature": "73", "precipitation": "0%", "humidity": "57%", "wind": "2 mph" }, { "time": "Friday 12:00 PM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/partly_cloudy.png", "weather": "Partly cloudy", "temperature": "76", "precipitation": "0%", "humidity": "47%", "wind": "3 mph" }, { "time": "Friday 1:00 PM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/partly_cloudy.png", "weather": "Partly cloudy", "temperature": "77", "precipitation": "5%", "humidity": "46%", "wind": "3 mph" }, { "time": "Friday 2:00 PM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/partly_cloudy.png", "weather": "Mostly cloudy", "temperature": "77", "precipitation": "10%", "humidity": "46%", "wind": "4 mph" }, { "time": "Friday 3:00 PM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/partly_cloudy.png", "weather": "Partly cloudy", "temperature": "77", "precipitation": "5%", "humidity": "46%", "wind": "5 mph" }, { "time": "Friday 4:00 PM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/partly_cloudy.png", "weather": "Partly cloudy", "temperature": "76", "precipitation": "5%", "humidity": "47%", "wind": "6 mph" }, { "time": "Friday 5:00 PM", "thumbnail": 
"https://ssl.gstatic.com/onebox/weather/64/partly_cloudy.png", "weather": "Mostly sunny", "temperature": "75", "precipitation": "0%", "humidity": "52%", "wind": "6 mph" }, { "time": "Friday 6:00 PM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/partly_cloudy.png", "weather": "Partly cloudy", "temperature": "71", "precipitation": "5%", "humidity": "60%", "wind": "6 mph" }, { "time": "Friday 7:00 PM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/partly_cloudy.png", "weather": "Mostly cloudy", "temperature": "67", "precipitation": "10%", "humidity": "72%", "wind": "6 mph" }, { "time": "Friday 8:00 PM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/cloudy.png", "weather": "Cloudy", "temperature": "63", "precipitation": "10%", "humidity": "84%", "wind": "6 mph" }, { "time": "Friday 9:00 PM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/cloudy.png", "weather": "Cloudy", "temperature": "61", "precipitation": "10%", "humidity": "91%", "wind": "6 mph" }, { "time": "Friday 10:00 PM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/cloudy.png", "weather": "Cloudy", "temperature": "60", "precipitation": "10%", "humidity": "95%", "wind": "6 mph" }, { "time": "Friday 11:00 PM", "thumbnail": "https://ssl.gstatic.com/onebox/weather/64/cloudy.png", "weather": "Cloudy", "temperature": "59", "precipitation": "10%", "humidity": "96%", "wind": "6 mph" }, ``` {% endcode %}
--- # Source: https://docs.aimlapi.com/solutions/bagoodex/ai-search-engine/find-videos.md # Find Videos ## Overview This is a description of one of the six use cases for the AI Search Engine—retrieving internet videos related to the requested subject.
An output example Request: *"how to work with github"* Response: {% code overflow="wrap" %} ```json [ { "link": "https://www.youtube.com/watch?v=iv8rSLsi1xo", "thumbnail": "https://dmwtgq8yidg0m.cloudfront.net/medium/_cYAcql_-g0w-video-thumb.jpeg", "title": "GitHub Tutorial - Beginner's Training Guide" }, { "link": "https://www.youtube.com/watch?v=tRZGeaHPoaw", "thumbnail": "https://dmwtgq8yidg0m.cloudfront.net/medium/-bforsTVDxRQ-video-thumb.jpeg", "title": "Git and GitHub Tutorial for Beginners" } ] ``` {% endcode %}
{% hint style="info" %} The output will be the requested information retrieved from the internet—or empty brackets `[]` if nothing was found or if the entered query does not match the selected search type (for example, entering 'owtjtwjtwjtwojo' instead of a valid video-related subject). {% endhint %} ## How to make a call Check how this call is made in the [example ](#example)below. {% hint style="success" %} Note that queries can include advanced search syntax: * **Search for an exact match:** Enter a word or phrase using `\"` before and after it.\ For example, `\"tallest building\"`. * **Search for a specific site:** Enter `site:` in front of a site or domain. For example, `site:youtube.com cat videos`. * **Exclude words from your search:** Enter `-` in front of a word that you want to leave out. For example, `jaguar speed -car`. {% endhint %} ## API Schema ## GET /v1/bagoodex/videos > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Bagoodex.v1.FetchVideosResponseDTO":{"type":"array","items":{"type":"object","properties":{"link":{"type":"string","nullable":true,"format":"uri"},"thumbnail":{"type":"string","nullable":true,"format":"uri"},"title":{"type":"string","nullable":true}}}}}},"paths":{"/v1/bagoodex/videos":{"get":{"operationId":"BagoodexControllerV1_fetchVideo_v1","parameters":[{"name":"followup_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"default":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Bagoodex.v1.FetchVideosResponseDTO"}}}}},"tags":["Bagoodex"]}}}} ``` ## Example First, the standard chat completion endpoint with your query is called. 
It returns an ID, which must then be passed as the sole input parameter `followup_id` to the second, use-case-specific endpoint: {% code overflow="wrap" %}
```python
import requests
from openai import OpenAI

# Insert your AIML API Key instead of :
API_KEY = ''
API_URL = 'https://api.aimlapi.com'


# Call the standard chat completion endpoint to get an ID
def complete_chat():
    client = OpenAI(
        base_url=API_URL,
        api_key=API_KEY,
    )

    response = client.chat.completions.create(
        model="bagoodex/bagoodex-search-v1",
        messages=[
            {
                "role": "user",
                "content": "how to work with github",
            },
        ],
    )

    # Extract the ID from the response
    gen_id = response.id
    print(f"Generated ID: {gen_id}")

    # Call this second endpoint with the generated ID
    get_videos(gen_id)


def get_videos(gen_id):
    params = {'followup_id': gen_id}
    headers = {'Authorization': f'Bearer {API_KEY}'}

    response = requests.get(f'{API_URL}/v1/bagoodex/videos', headers=headers, params=params)
    print(response.json())


# Run the function
complete_chat()
```
{% endcode %} **Model Response**: {% code overflow="wrap" %}
```json
[
  {
    "link": "https://www.youtube.com/watch?v=iv8rSLsi1xo",
    "thumbnail": "https://dmwtgq8yidg0m.cloudfront.net/medium/_cYAcql_-g0w-video-thumb.jpeg",
    "title": "GitHub Tutorial - Beginner's Training Guide"
  },
  {
    "link": "https://www.youtube.com/watch?v=tRZGeaHPoaw",
    "thumbnail": "https://dmwtgq8yidg0m.cloudfront.net/medium/-bforsTVDxRQ-video-thumb.jpeg",
    "title": "Git and GitHub Tutorial for Beginners"
  }
]
```
{% endcode %} --- # Source: https://docs.aimlapi.com/api-references/image-models/flux/flux-2-edit.md # flux-2-edit {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `blackforestlabs/flux-2-edit` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A 4MP photorealistic, production-grade editor with advanced multi-reference control capabilities. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["blackforestlabs/flux-2-edit"]},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"image_urls":{"type":"array","items":{"type":"string","format":"uri"},"minItems":1,"maxItems":3,"description":"List of URLs or local Base64 encoded images to edit."},"image_size":{"anyOf":[{"type":"object","properties":{"width":{"type":"integer","minimum":512,"maximum":2048,"default":1024},"height":{"type":"integer","minimum":512,"maximum":2048,"default":768}},"description":"For both height and width, the value must be a multiple of 32."},{"type":"string","enum":["square_hd","square","portrait_4_3","portrait_16_9","landscape_4_3","landscape_16_9"],"description":"The size of the generated image."}],"default":"landscape_4_3"},"output_format":{"type":"string","enum":["jpeg","png","webp"],"default":"png","description":"The format of the generated image."},"enable_prompt_expansion":{"type":"boolean","default":true,"description":"If set to True, prompt will be upsampled with more details."},"guidance_scale":{"type":"number","minimum":0,"maximum":20,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt when looking for a related image to show you."},"num_inference_steps":{"type":"integer","minimum":4,"maximum":50,"description":"The number of inference steps to perform."},"acceleration":{"type":"string","enum":["none","regular","high"],"default":"regular","description":"The speed of the generation. The higher the speed, the faster the generation."},"seed":{"type":"integer","minimum":1,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"num_images":{"type":"number","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."},"enable_safety_checker":{"type":"boolean","default":true,"description":"If set to True, the safety checker will be enabled."}},"required":["model","prompt","image_urls"],"title":"blackforestlabs/flux-2-edit"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image using two input images and a prompt that defines how they should be edited. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "blackforestlabs/flux-2-edit", "prompt": "Combine the images so the T-Rex is wearing a business suit, sitting in a cozy small café, drinking from the mug. Blur the background slightly to create a bokeh effect.", "image_urls": [ "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png", "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/blue-mug.jpg" ], "guidance_scale": 19 } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'blackforestlabs/flux-2-edit', prompt: 'Combine the images so the T-Rex is wearing a business suit, sitting in a cozy small café, drinking from the mug. Blur the background slightly to create a bokeh effect.', image_urls: [ "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png", "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/blue-mug.jpg" ], guidance_scale: 19 }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "data": [ { "url": "https://cdn.aimlapi.com/flamingo/files/b/0a847ab7/KtavVJcI8jz5cN49wXF0e.png" } ], "meta": { "usage": { "tokens_used": 75601 } } } ``` {% endcode %}
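The response contains only a URL for each generated image. If you want to store the result locally, a small follow-up sketch like the one below can be appended to the Quick Example; it reuses the `data` dictionary printed above, and the file name is just an arbitrary choice:

{% code overflow="wrap" %}
```python
import requests

# Minimal sketch: save the first generated image from the response to disk.
# `data` is the parsed JSON returned by the /v1/images/generations call above.
def save_first_image(data: dict, file_name: str = "flux-2-edit-result.png") -> None:
    url = data["data"][0]["url"]  # "data" is the list of generated images
    image = requests.get(url, stream=True)
    image.raise_for_status()
    with open(file_name, "wb") as file:
        for chunk in image.iter_content(chunk_size=8192):
            file.write(chunk)
```
{% endcode %}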
| Reference Images | Generated Image |
| --- | --- |
| Image #1 | "Combine the images so the T-Rex is wearing a business suit, sitting in a cozy small café, drinking from the mug. Blur the background slightly to create a bokeh effect." |
| Image #2 | |
--- # Source: https://docs.aimlapi.com/api-references/image-models/flux/flux-2-lora-edit.md # flux-2-lora-edit {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `blackforestlabs/flux-2-lora-edit` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview This image-to-image model enables you to apply your trained LoRA[^1] adapters, producing domain-specific outputs aligned with your brand aesthetic, expert content areas, or specialized visual constraints. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["blackforestlabs/flux-2-lora-edit"]},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"image_urls":{"type":"array","items":{"type":"string","format":"uri"},"minItems":1,"maxItems":3,"description":"List of URLs or local Base64 encoded images to edit."},"image_size":{"anyOf":[{"type":"object","properties":{"width":{"type":"integer","minimum":512,"maximum":2048,"default":1024},"height":{"type":"integer","minimum":512,"maximum":2048,"default":768}},"description":"For both height and width, the value must be a multiple of 32."},{"type":"string","enum":["square_hd","square","portrait_4_3","portrait_16_9","landscape_4_3","landscape_16_9"],"description":"The size of the generated image."}],"default":"landscape_4_3"},"output_format":{"type":"string","enum":["jpeg","png","webp"],"default":"png","description":"The format of the generated image."},"enable_prompt_expansion":{"type":"boolean","default":true,"description":"If set to True, prompt will be upsampled with more details."},"guidance_scale":{"type":"number","minimum":0,"maximum":20,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt when looking for a related image to show you."},"num_inference_steps":{"type":"integer","minimum":4,"maximum":50,"description":"The number of inference steps to perform."},"acceleration":{"type":"string","enum":["none","regular","high"],"default":"regular","description":"The speed of the generation. The higher the speed, the faster the generation."},"seed":{"type":"integer","minimum":1,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"num_images":{"type":"number","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."},"enable_safety_checker":{"type":"boolean","default":true,"description":"If set to True, the safety checker will be enabled."},"loras":{"type":"array","items":{"type":"object","properties":{"path":{"type":"string","description":"URL, HuggingFace repo ID (owner/repo)."},"scale":{"type":"number","minimum":0,"maximum":4,"description":"Scale factor for LoRA application."}},"required":["path"]},"maxItems":3,"description":"List of LoRA weights to apply (maximum 3). 
Each LoRA can be a URL, HuggingFace repo ID, or local path."}},"required":["model","prompt","image_urls"],"title":"blackforestlabs/flux-2-lora-edit"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified size using two input images and a prompt that defines how they should be edited. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "blackforestlabs/flux-2-lora-edit", "prompt": "Combine the images so the T-Rex is wearing a business suit, sitting in a cozy small café, drinking from the mug. Blur the background slightly to create a bokeh effect.", "image_urls": [ "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png", "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/blue-mug.jpg" ], "guidance_scale": 19 } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'blackforestlabs/flux-2-lora-edit', prompt: 'Combine the images so the T-Rex is wearing a business suit, sitting in a cozy small café, drinking from the mug. Blur the background slightly to create a bokeh effect.', image_urls: [ "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png", "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/blue-mug.jpg" ], guidance_scale: 19 }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "data": [ { "url": "https://cdn.aimlapi.com/flamingo/files/b/0a847b4c/2-TzrE2bwvfldwD4O3aVi.png" } ], "meta": { "usage": { "tokens_used": 132300 } } } ``` {% endcode %}
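Note that the Quick Example above does not attach any LoRA adapters, so it behaves much like the plain edit model. To actually apply an adapter, add a `loras` entry to the same request body. The sketch below is only an illustration: `your-org/your-style-lora` is a placeholder HuggingFace repo ID, not a real adapter.

{% code overflow="wrap" %}
```python
import requests

# Minimal sketch: the Quick Example request extended with a `loras` entry.
# "your-org/your-style-lora" is a placeholder repo ID -- substitute your own adapter.
response = requests.post(
    "https://api.aimlapi.com/v1/images/generations",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "blackforestlabs/flux-2-lora-edit",
        "prompt": "Combine the images so the T-Rex is wearing a business suit, "
                  "sitting in a cozy small café, drinking from the mug. "
                  "Blur the background slightly to create a bokeh effect.",
        "image_urls": [
            "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png",
            "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/blue-mug.jpg",
        ],
        # Up to 3 adapters; `path` is a URL or HuggingFace repo ID, `scale` ranges from 0 to 4.
        "loras": [{"path": "your-org/your-style-lora", "scale": 1.0}],
    },
)
print(response.json())
```
{% endcode %}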
| Reference Images | Generated Image |
| --- | --- |
| Image #1 | "Combine the images so the T-Rex is wearing a business suit, sitting in a cozy small café, drinking from the mug. Blur the background slightly to create a bokeh effect." |
| Image #2 | |
[^1]: The **LoRA algorithm** (Low-Rank Adaptation) is a parameter-efficient fine-tuning technique used to adapt large language models (LLMs) and stable diffusion models to new tasks or domains without retraining the entire model. This process is faster and requires significantly less memory and computational resources than full fine-tuning. --- # Source: https://docs.aimlapi.com/api-references/image-models/flux/flux-2-lora.md # flux-2-lora {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `blackforestlabs/flux-2-lora` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview This text-to-image model enables you to apply your trained LoRA[^1] adapters, producing domain-specific outputs aligned with your brand aesthetic, expert content areas, or specialized visual constraints. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["blackforestlabs/flux-2-lora"]},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"image_size":{"anyOf":[{"type":"object","properties":{"width":{"type":"integer","minimum":512,"maximum":2048,"default":1024},"height":{"type":"integer","minimum":512,"maximum":2048,"default":768}},"description":"For both height and width, the value must be a multiple of 32."},{"type":"string","enum":["square_hd","square","portrait_4_3","portrait_16_9","landscape_4_3","landscape_16_9"],"description":"The size of the generated image."}],"default":"landscape_4_3"},"output_format":{"type":"string","enum":["jpeg","png","webp"],"default":"png","description":"The format of the generated image."},"enable_prompt_expansion":{"type":"boolean","default":true,"description":"If set to True, prompt will be upsampled with more details."},"num_images":{"type":"number","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."},"seed":{"type":"integer","minimum":1,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"guidance_scale":{"type":"number","minimum":0,"maximum":20,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt when looking for a related image to show you."},"num_inference_steps":{"type":"integer","minimum":4,"maximum":50,"description":"The number of inference steps to perform."},"acceleration":{"type":"string","enum":["none","regular","high"],"default":"regular","description":"The speed of the generation. 
The higher the speed, the faster the generation."},"enable_safety_checker":{"type":"boolean","default":true,"description":"If set to True, the safety checker will be enabled."},"loras":{"type":"array","items":{"type":"object","properties":{"path":{"type":"string","description":"URL, HuggingFace repo ID (owner/repo)."},"scale":{"type":"number","minimum":0,"maximum":4,"description":"Scale factor for LoRA application."}},"required":["path"]},"maxItems":3,"description":"List of LoRA weights to apply (maximum 3). Each LoRA can be a URL, HuggingFace repo ID, or local path."}},"required":["model","prompt"],"title":"blackforestlabs/flux-2-lora"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified size using a simple prompt. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json" }, json={ "model": "blackforestlabs/flux-2-lora", "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.", "image_size": { "width": 1472, "height": 512 } } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'blackforestlabs/flux-2-lora', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.', image_size: { width: 1472, height: 512 }, }), }); const data = await response.json(); console.log('Generation:', data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json { "data": [ { "url": "https://cdn.aimlapi.com/flamingo/files/b/0a847b04/UNSH9jzS_1AHujNGtda30.png" } ], "meta": { "usage": { "tokens_used": 44100 } } } ``` {% endcode %}
We obtained the following nice 1472x512 image by running this code example:

"A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses."

[^1]: The **LoRA algorithm** (Low-Rank Adaptation) is a parameter-efficient fine-tuning technique used to adapt large language models (LLMs) and stable diffusion models to new tasks or domains without retraining the entire model. This process is faster and requires significantly less memory and computational resources than full fine-tuning. --- # Source: https://docs.aimlapi.com/api-references/image-models/flux/flux-2-pro-edit.md # flux-2-pro-edit {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `blackforestlabs/flux-2-pro-edit` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview An advanced image editing model optimized for high-quality manipulation, style transfer, and sequential editing tasks. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["blackforestlabs/flux-2-pro-edit"]},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"image_urls":{"type":"array","items":{"type":"string","format":"uri"},"minItems":1,"maxItems":3,"description":"List of URLs or local Base64 encoded images to edit."},"image_size":{"anyOf":[{"type":"object","properties":{"width":{"type":"integer","minimum":512,"maximum":2048,"default":1024},"height":{"type":"integer","minimum":512,"maximum":2048,"default":768}},"description":"For both height and width, the value must be a multiple of 32."},{"type":"string","enum":["square_hd","square","portrait_4_3","portrait_16_9","landscape_4_3","landscape_16_9"],"description":"The size of the generated image."}],"default":"landscape_4_3"},"output_format":{"type":"string","enum":["jpeg","png","webp"],"default":"png","description":"The format of the generated image."},"seed":{"type":"integer","minimum":1,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"enable_safety_checker":{"type":"boolean","default":true,"description":"If set to True, the safety checker will be enabled."},"safety_tolerance":{"type":"string","enum":["1","2","3","4","5","6"],"default":"2","description":"The safety tolerance level for the generated image. 
1 being the most strict and 5 being the most permissive."}},"required":["model","prompt","image_urls"],"title":"blackforestlabs/flux-2-pro-edit"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified size using two input images and a prompt that defines how they should be edited. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "blackforestlabs/flux-2-pro-edit", "prompt": "Combine the images so the T-Rex is wearing a business suit, sitting in a cozy small café, drinking from the mug. Blur the background slightly to create a bokeh effect.", "image_urls": [ "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png", "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/blue-mug.jpg" ], "guidance_scale": 19 } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'blackforestlabs/flux-2-pro-edit', prompt: 'Combine the images so the T-Rex is wearing a business suit, sitting in a cozy small café, drinking from the mug. Blur the background slightly to create a bokeh effect.', image_urls: [ "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png", "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/blue-mug.jpg" ], guidance_scale: 19 }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "data": [ { "url": "https://cdn.aimlapi.com/flamingo/files/b/0a847bef/yW2QPKFJueCjSc-o-JpXk.png" } ], "meta": { "usage": { "tokens_used": 189000 } } } ``` {% endcode %}
| Reference Images | Generated Image |
| --- | --- |
| Image #1 | "Combine the images so the T-Rex is wearing a business suit, sitting in a cozy small café, drinking from the mug. Blur the background slightly to create a bokeh effect." |
| Image #2 | |
--- # Source: https://docs.aimlapi.com/api-references/image-models/flux/flux-2-pro.md # flux-2-pro {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `blackforestlabs/flux-2-pro` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview An advanced text-to-image model optimized for high-quality manipulation. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["blackforestlabs/flux-2-pro"]},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"image_size":{"anyOf":[{"type":"object","properties":{"width":{"type":"integer","minimum":512,"maximum":2048,"default":1024},"height":{"type":"integer","minimum":512,"maximum":2048,"default":768}},"description":"For both height and width, the value must be a multiple of 32."},{"type":"string","enum":["square_hd","square","portrait_4_3","portrait_16_9","landscape_4_3","landscape_16_9"],"description":"The size of the generated image."}],"default":"landscape_4_3"},"output_format":{"type":"string","enum":["jpeg","png","webp"],"default":"png","description":"The format of the generated image."},"seed":{"type":"integer","minimum":1,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"enable_safety_checker":{"type":"boolean","default":true,"description":"If set to True, the safety checker will be enabled."},"safety_tolerance":{"type":"string","enum":["1","2","3","4","5","6"],"default":"2","description":"The safety tolerance level for the generated image. 1 being the most strict and 5 being the most permissive."}},"required":["model","prompt"],"title":"blackforestlabs/flux-2-pro"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified size using a simple prompt. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json" }, json={ "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.", "model": "blackforestlabs/flux-2-pro", "image_size": { "width": 1472, "height": 512 } } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'blackforestlabs/flux-2-pro', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.', image_size: { width: 1472, height: 512 }, }), }); const data = await response.json(); console.log('Generation:', data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json { "data": [ { "url": "https://cdn.aimlapi.com/flamingo/files/b/0a847b9d/C6y_7teCcSaSqeYkhIrnt.png" } ], "meta": { "usage": { "tokens_used": 63000 } } } ``` {% endcode %}
We obtained the following nice 1472x512 image by running this code example:

"A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses."

--- # Source: https://docs.aimlapi.com/api-references/image-models/flux/flux-2.md # flux-2 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `blackforestlabs/flux-2` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A 4MP photorealistic, production-grade text-to-image generator. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"Flux 2 - AI/ML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["blackforestlabs/flux-2"]},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"image_size":{"anyOf":[{"type":"object","properties":{"width":{"type":"integer","minimum":512,"maximum":2048,"default":1024},"height":{"type":"integer","minimum":512,"maximum":2048,"default":768}},"description":"For both height and width, the value must be a multiple of 32."},{"type":"string","enum":["square_hd","square","portrait_4_3","portrait_16_9","landscape_4_3","landscape_16_9"],"description":"The size of the generated image."}],"default":"landscape_4_3"},"output_format":{"type":"string","enum":["jpeg","png","webp"],"default":"png","description":"The format of the generated image."},"enable_prompt_expansion":{"type":"boolean","default":true,"description":"If set to True, prompt will be upsampled with more details."},"num_images":{"type":"number","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."},"seed":{"type":"integer","minimum":1,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"guidance_scale":{"type":"number","minimum":0,"maximum":20,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt when looking for a related image to show you."},"num_inference_steps":{"type":"integer","minimum":4,"maximum":50,"description":"The number of inference steps to perform."},"acceleration":{"type":"string","enum":["none","regular","high"],"default":"regular","description":"The speed of the generation. 
The higher the speed, the faster the generation."},"enable_safety_checker":{"type":"boolean","default":true,"description":"If set to True, the safety checker will be enabled."}},"required":["model","prompt"],"title":"blackforestlabs/flux-2"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified size using a simple prompt. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json" }, json={ "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.", "model": "blackforestlabs/flux-2", "image_size": { "width": 1472, "height": 512 } } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'blackforestlabs/flux-2', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.', image_size: { width: 1472, height: 512 }, }), }); const data = await response.json(); console.log('Generation:', data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "data": [ { "url": "https://cdn.aimlapi.com/flamingo/files/b/0a84799e/1kYzoFE6V9Jx2Rfmmn_4M.png" } ], "meta": { "usage": { "tokens_used": 25200 } } } ``` {% endcode %}
We obtained the following nice 1472x512 image by running this code example:

"A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses."

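If you want to save the generated image locally, you can take the `url` field from the `data` array in the response and download the file. Below is a minimal sketch in Python; the local file name (`flux-2-result.png`) is just an illustrative choice.

{% code overflow="wrap" %}
```python
import requests

API_KEY = ""  # insert your AIML API key

def generate_and_save():
    response = requests.post(
        "https://api.aimlapi.com/v1/images/generations",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        json={
            "model": "blackforestlabs/flux-2",
            "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.",
            "output_format": "png",
        },
    )
    response.raise_for_status()
    data = response.json()

    # The generated files are listed in the "data" array; each item carries a downloadable URL.
    image_url = data["data"][0]["url"]
    image_bytes = requests.get(image_url).content

    # "flux-2-result.png" is an arbitrary local file name chosen for this example.
    with open("flux-2-result.png", "wb") as f:
        f.write(image_bytes)

if __name__ == "__main__":
    generate_and_save()
```
{% endcode %}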
--- # Source: https://docs.aimlapi.com/api-references/image-models/flux/flux-dev-image-to-image.md # flux/dev/image-to-image {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `flux/dev/image-to-image` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A state-of-the-art image generation model that utilizes a 12 billion parameter rectified flow transformer architecture. It is designed to generate high-quality images from textual descriptions, making it a powerful tool for developers and creatives.
| Model | Generated image properties |
| ----- | -------------------------- |
| `flux/dev/image-to-image` | Format: PNG<br>Fixed size: Matches the dimensions of the reference image. |
## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["flux/dev/image-to-image"]},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"num_images":{"type":"number","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."},"seed":{"type":"integer","minimum":1,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"guidance_scale":{"type":"number","minimum":1,"maximum":20,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt when looking for a related image to show you."},"num_inference_steps":{"type":"integer","minimum":1,"maximum":50,"default":50,"description":"The number of inference steps to perform."},"enable_safety_checker":{"type":"boolean","default":true,"description":"If set to True, the safety checker will be enabled."},"image_url":{"type":"string","format":"uri","description":"The URL of the reference image."},"strength":{"type":"number","default":0.95,"description":"Determines how much the prompt influences the generated image."}},"required":["model","prompt","image_url"],"title":"flux/dev/image-to-image"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate a new image using the one from the [flux/dev Quick Example](https://docs.aimlapi.com/api-references/image-models/flux-dev#quick-example) as a reference — and make a simple change to it with a prompt. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "flux/dev/image-to-image", "prompt": "Add a bird to the foreground of the photo.", "image_url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png", "strength": 0.8 } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'flux/dev/image-to-image', prompt: 'Add a bird to the foreground of the photo.', image_url: 'https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png', strength: 0.8, }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { images: [ { url: 'https://cdn.aimlapi.com/eagle/files/elephant/RmRsL9NMW_kkRy6MemjZJ_ac9897dd871842e2a689b8bc24b4bf08.jpg', width: 1472, height: 512, content_type: 'image/jpeg' } ], timings: { inference: 4.4450759180035675 }, seed: 3082066483, has_nsfw_concepts: [ false ], prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.' } ``` {% endcode %}
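The `strength` parameter controls how strongly the prompt overrides the reference image (the schema default is 0.95). If you're unsure which value suits your case, one practical approach is to generate a few variants across a range and compare them. The sketch below is only an illustration: the strength values are arbitrary, and it assumes the `images`-array response format shown above.

{% code overflow="wrap" %}
```python
import requests

API_KEY = ""  # insert your AIML API key
REFERENCE = "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png"

# Try a few strength values: lower values stay closer to the reference image,
# higher values let the prompt influence the result more strongly.
for strength in (0.5, 0.8, 0.95):
    response = requests.post(
        "https://api.aimlapi.com/v1/images/generations",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        json={
            "model": "flux/dev/image-to-image",
            "prompt": "Add a bird to the foreground of the photo.",
            "image_url": REFERENCE,
            "strength": strength,
        },
    )
    response.raise_for_status()
    data = response.json()
    print(f"strength={strength}: {data['images'][0]['url']}")
```
{% endcode %}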
| Reference Image | Generated Image |
| --------------- | --------------- |
| ![](https://cdn.aimlapi.com/eagle/files/monkey/GHx5aT0PR9GXtGi3Cx7CE.png) | |

--- # Source: https://docs.aimlapi.com/api-references/image-models/flux/flux-dev.md # flux/dev {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `flux/dev` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A state-of-the-art image generation model that utilizes a 12 billion parameter rectified flow transformer architecture. It is designed to generate high-quality images from textual descriptions, making it a powerful tool for developers and creatives.
| Model | Generated image properties |
| ----- | -------------------------- |
| `flux/dev` | Format: PNG<br>Min size: 512x512<br>Max size: 1536x1536<br>Default size: 1024x768<br>For both height and width, the value must be a multiple of 32. |
## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["flux/dev"]},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"num_images":{"type":"number","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."},"seed":{"type":"integer","minimum":1,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"image_size":{"anyOf":[{"type":"object","properties":{"width":{"type":"integer","minimum":512,"maximum":1536,"default":1024},"height":{"type":"integer","minimum":512,"maximum":1536,"default":768}},"description":"For both height and width, the value must be a multiple of 32."},{"type":"string","enum":["square_hd","square","portrait_4_3","portrait_16_9","landscape_4_3","landscape_16_9"],"description":"The size of the generated image."}],"default":"landscape_4_3"},"guidance_scale":{"type":"number","minimum":1,"maximum":20,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt when looking for a related image to show you."},"num_inference_steps":{"type":"integer","minimum":1,"maximum":50,"default":50,"description":"The number of inference steps to perform."},"enable_safety_checker":{"type":"boolean","default":true,"description":"If set to True, the safety checker will be enabled."}},"required":["model","prompt"],"title":"flux/dev"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image using a simple prompt. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "flux/dev", "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.", } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'flux/dev', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.', }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "images": [ { "url": "https://cdn.aimlapi.com/eagle/files/monkey/zS_fT2UFKmLqlbEHYCRys.jpeg", "width": 1024, "height": 768, "content_type": "image/jpeg" } ], "timings": { "inference": 1.226824438199401 }, "seed": 1765470393, "has_nsfw_concepts": [ false ], "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses." } ``` {% endcode %}
We obtained the following 1024x768 image by running this code example:

'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.'

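`flux/dev` accepts `image_size` either as one of the preset strings from the schema (for example `square_hd` or `landscape_16_9`) or as an explicit `width`/`height` object, where each value must lie between 512 and 1536 and be a multiple of 32. If your target dimensions come from somewhere else, a small helper like the hypothetical `snap_to_grid` below can bring them into range before you send the request; this is only a sketch, not part of the API.

{% code overflow="wrap" %}
```python
def snap_to_grid(value: int, minimum: int = 512, maximum: int = 1536, step: int = 32) -> int:
    """Clamp a dimension into [minimum, maximum] and round it down to a multiple of `step`."""
    value = max(minimum, min(maximum, value))
    return (value // step) * step

# Example: 1500x333 becomes 1472x512, which is valid for flux/dev.
image_size = {
    "width": snap_to_grid(1500),
    "height": snap_to_grid(333),
}
print(image_size)  # {'width': 1472, 'height': 512}
```
{% endcode %}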
--- # Source: https://docs.aimlapi.com/api-references/image-models/flux/flux-kontext-max-image-to-image.md # flux/kontext-max/image-to-image {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `flux/kontext-max/image-to-image` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview An image-to-image model that modifies only what the prompt instructs, leaving the rest of the image untouched.
| Model | Properties of Generated Images |
| ----- | ------------------------------ |
| `flux/kontext-max/image-to-image` | Format: JPEG, PNG<br>Image size can't be set directly — only a preset aspect ratio can be chosen.<br>Default aspect ratio and size: 16:9, 1184x880 (well, not quite 16:9) |
## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["flux/kontext-max/image-to-image"]},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"num_images":{"type":"number","minimum":1,"maximum":4,"default":1,"description":"Number of image variations to generate. Each image is a different attempt to combine the reference images (from the image_url parameter) according to the prompt."},"seed":{"type":"integer","minimum":1,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"guidance_scale":{"type":"number","minimum":1,"maximum":20,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt when looking for a related image to show you."},"safety_tolerance":{"type":"string","enum":["1","2","3","4","5","6"],"default":"2","description":"The safety tolerance level for the generated image. 1 being the most strict and 5 being the most permissive."},"output_format":{"type":"string","enum":["jpeg","png"],"default":"jpeg","description":"The format of the generated image."},"aspect_ratio":{"type":"string","enum":["21:9","16:9","4:3","3:2","1:1","2:3","3:4","9:16","9:21"],"default":"16:9","description":"The aspect ratio of the generated image."},"image_url":{"anyOf":[{"type":"string","format":"uri"},{"type":"array","items":{"type":"string","format":"uri"},"maxItems":4}],"description":"One or more image URLs used as visual references. The model merges them into a single image following the prompt instructions."}},"required":["model","prompt","image_url"],"title":"flux/kontext-max/image-to-image"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate a new image using the one from the [flux/dev Quick Example](https://docs.aimlapi.com/api-references/image-models/flux-dev#quick-example) as a reference — and make a simple change to it with a prompt. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "flux/kontext-max/image-to-image", "image_url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png", # URL of the reference picture "prompt": "Add a bird to the foreground of the photo.", } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'flux/kontext-max/image-to-image', prompt: 'Add a bird to the foreground of the photo.', image_url: 'https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png', }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "images": [ { "url": "https://cdn.aimlapi.com/squirrel/files/rabbit/4LZOccB3ChjGNDi3G8zTK_202357995b8642f5b7c73925cc388b3d.jpg", "content_type": "image/jpeg", "file_name": null, "file_size": null, "width": 1184, "height": 880 } ], "timings": {}, "seed": 1415518620, "has_nsfw_concepts": [ false ], "prompt": "Add a bird to the foreground of the photo." } ``` {% endcode %}
| Reference Image | Generated Image |
| --------------- | --------------- |

More generated images, with the prompts used:

* "Add a crown to the T-rex's head."
* "Add a couple of silver wings"
* "Remove the dinosaur. Place a book and a bouquet of wildflowers in blue and pink tones on the lounge chair. Let a light foamy surf gently wash the bottom of the chair. Don't change anything else."
* "Make the dinosaur sit on a lounge chair with its back to the camera, looking toward the water. The setting sun has almost disappeared below the horizon."
## Example #2: Combine two images This time, we’ll pass two images to the model: our dinosaur and a solid blue mug. We'll ask the model to place the dinosaur onto the mug as a print.
Our input images:

* Our chilling T-rex
* Our blue mug
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation # URLs of two reference pictures images = ["https://zovi0.github.io/public_misc/flux-dev-t-rex.png", "https://zovi0.github.io/public_misc/blue-mug.jpg"] def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "prompt": "Place this image with the t-rex on this mug from the second image as a print. Make it look fit and natural.", "model": "flux/kontext-max/image-to-image", "image_url": images } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript const main = async () => { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { Authorization: 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'flux/kontext-max/image-to-image', prompt: 'Place this image with the t-rex on this mug from the second image as a print. Make it look fit and natural.', image_url: ['https://zovi0.github.io/public_misc/flux-dev-t-rex.png', 'https://zovi0.github.io/public_misc/blue-mug.jpg'], }), }).then((res) => res.json()); console.log(response); }; main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "images": [ { "url": "https://cdn.aimlapi.com/squirrel/files/zebra/DIXc328YCO52TIo5WEhXM_560d09b975e34c1498ebba71bf0e4eb6.jpg", "width": 1184, "height": 880, "content_type": "image/jpeg" } ], "timings": {}, "seed": 2103864242, "has_nsfw_concepts": [ false ], "prompt": "Place this image with the t-rex on this mug from the second image as a print. Make it look fit and natural." } ``` {% endcode %}

"Place this image with the t-rex on this mug from the second image as a print. Make it look fit and natural."

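For image-to-image edits you can also request several variations of the same edit in one call via `num_images` (1–4 per the schema); each returned item is a separate attempt at the edit. Below is a minimal sketch that saves every variation, assuming the `images`-array response format shown above; the local file names are illustrative.

{% code overflow="wrap" %}
```python
import requests

API_KEY = ""  # insert your AIML API key

response = requests.post(
    "https://api.aimlapi.com/v1/images/generations",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "flux/kontext-max/image-to-image",
        "prompt": "Add a bird to the foreground of the photo.",
        "image_url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png",
        "num_images": 3,  # ask for three variations of the same edit
    },
)
response.raise_for_status()

# Save every variation; each entry in "images" has its own URL.
for i, item in enumerate(response.json()["images"]):
    with open(f"kontext-max-variant-{i}.jpg", "wb") as f:
        f.write(requests.get(item["url"]).content)
```
{% endcode %}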
--- # Source: https://docs.aimlapi.com/api-references/image-models/flux/flux-kontext-max-text-to-image.md # flux/kontext-max/text-to-image {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `flux/kontext-max/text-to-image` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A new Flux model optimized for maximum image quality.
| Model | Properties of Generated Images |
| ----- | ------------------------------ |
| `flux/kontext-max/text-to-image` | Format: JPEG, PNG<br>Image size can't be set directly — only a preset aspect ratio can be chosen.<br>Default aspect ratio and size: 1:1, 1024x1024 |
## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["flux/kontext-max/text-to-image"]},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"num_images":{"type":"number","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."},"seed":{"type":"integer","minimum":1,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"guidance_scale":{"type":"number","minimum":1,"maximum":20,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt when looking for a related image to show you."},"safety_tolerance":{"type":"string","enum":["1","2","3","4","5","6"],"default":"2","description":"The safety tolerance level for the generated image. 1 being the most strict and 5 being the most permissive."},"output_format":{"type":"string","enum":["jpeg","png"],"default":"jpeg","description":"The format of the generated image."},"aspect_ratio":{"type":"string","enum":["21:9","16:9","4:3","3:2","1:1","2:3","3:4","9:16","9:21"],"default":"16:9","description":"The aspect ratio of the generated image."}},"required":["model","prompt"],"title":"flux/kontext-max/text-to-image"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified size using a simple prompt. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.", "model": "flux/kontext-max/text-to-image", "aspect_ratio": '21:9' } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'flux/kontext-max/text-to-image', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.', aspect_ratio: '21:9', }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { images: [ { url: 'https://cdn.aimlapi.com/squirrel/files/monkey/zP2cXFuTA4Bd0GbZAAb8y_be6eb84f036744dcbb2e155296b96be1.jpg', width: 1568, height: 672, content_type: 'image/jpeg' } ], timings: {}, seed: 1617845674, has_nsfw_concepts: [ false ], prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.' } ``` {% endcode %}
We obtained the following 1568x672 image by running this code example:

'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.'

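As noted in the schema, the same `seed` and the same prompt given to the same model version reproduce the same image. This is handy when you want to change only one parameter (for example, `output_format`) while keeping the composition fixed. A short sketch; the seed value is arbitrary, and the response is assumed to use the `images` array shown above.

{% code overflow="wrap" %}
```python
import requests

API_KEY = ""  # insert your AIML API key

def generate(seed: int) -> str:
    response = requests.post(
        "https://api.aimlapi.com/v1/images/generations",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        json={
            "model": "flux/kontext-max/text-to-image",
            "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.",
            "aspect_ratio": "21:9",
            "seed": seed,  # a fixed seed makes the composition reproducible
        },
    )
    response.raise_for_status()
    return response.json()["images"][0]["url"]

# Both calls use seed 42, so they should return the same image.
print(generate(42))
print(generate(42))
```
{% endcode %}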
--- # Source: https://docs.aimlapi.com/api-references/image-models/flux/flux-kontext-pro-image-to-image.md # flux/kontext-pro/image-to-image {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `flux/kontext-pro/image-to-image` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview An image-to-image model that modifies only what the prompt instructs, leaving the rest of the image untouched.
| Model | Properties of Generated Images |
| ----- | ------------------------------ |
| `flux/kontext-pro/image-to-image` | Format: JPEG, PNG<br>Image size can't be set directly — only a preset aspect ratio can be chosen.<br>Default aspect ratio and size: 16:9, 1184x880 (well, not quite 16:9) |
## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["flux/kontext-pro/image-to-image"]},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"num_images":{"type":"number","minimum":1,"maximum":4,"default":1,"description":"Number of image variations to generate. Each image is a different attempt to combine the reference images (from the image_url parameter) according to the prompt."},"seed":{"type":"integer","minimum":1,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"guidance_scale":{"type":"number","minimum":1,"maximum":20,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt when looking for a related image to show you."},"safety_tolerance":{"type":"string","enum":["1","2","3","4","5","6"],"default":"2","description":"The safety tolerance level for the generated image. 1 being the most strict and 5 being the most permissive."},"output_format":{"type":"string","enum":["jpeg","png"],"default":"jpeg","description":"The format of the generated image."},"aspect_ratio":{"type":"string","enum":["21:9","16:9","4:3","3:2","1:1","2:3","3:4","9:16","9:21"],"default":"16:9","description":"The aspect ratio of the generated image."},"image_url":{"anyOf":[{"type":"string","format":"uri"},{"type":"array","items":{"type":"string","format":"uri"},"maxItems":4}],"description":"One or more image URLs used as visual references. The model merges them into a single image following the prompt instructions."}},"required":["model","prompt","image_url"],"title":"flux/kontext-pro/image-to-image"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate a new image using the one from the [flux/dev Quick Example](https://docs.aimlapi.com/api-references/image-models/flux-dev#quick-example) as a reference — and make a simple change to it with a prompt. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "flux/kontext-pro/image-to-image", "image_url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png", # URL of the reference picture "prompt": "Add a bird to the foreground of the photo.", } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'flux/kontext-pro/image-to-image', prompt: 'Add a bird to the foreground of the photo.', image_url: 'https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png', }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "images": [ { "url": "https://cdn.aimlapi.com/squirrel/files/panda/qMuknweKekEYlj9-RdUNt_f0706e451d674554a4c03f2489cf7d5a.jpg", "content_type": "image/jpeg", "file_name": null, "file_size": null, "width": 1184, "height": 880 } ], "timings": {}, "seed": 3959063143, "has_nsfw_concepts": [ false ], "prompt": "Add a bird to the foreground of the photo." } ``` {% endcode %}
| Reference Image | Generated Image |
| --------------- | --------------- |

More generated images, with the prompts used:

* "Add a crown to the T-rex's head."
* "Add a couple of silver wings"
* "Remove the dinosaur. Place a book and a bouquet of wildflowers in blue and pink tones on the lounge chair. Let a light foamy surf gently wash the bottom of the chair. Don't change anything else."
* "Make the dinosaur sit on a lounge chair with its back to the camera, looking toward the water. The setting sun has almost disappeared below the horizon."
## Example #2: Combine two images This time, we’ll pass two images to the model: our dinosaur and a solid blue mug. We'll ask the model to place the dinosaur onto the mug as a print.
Our input images:

* Our chilling T-rex
* Our blue mug
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "prompt": "Place this image with the t-rex on this mug from the second image as a print. Make it look fit and natural.", "model": "flux/kontext-pro/image-to-image", "image_url": [ # URLs of two reference pictures "https://zovi0.github.io/public_misc/flux-dev-t-rex.png", "https://zovi0.github.io/public_misc/blue-mug.jpg" ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript const main = async () => { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { Authorization: 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'flux/kontext-pro/image-to-image', prompt: 'Place this image with the t-rex on this mug from the second image as a print. Make it look fit and natural.', image_url: [ // URLs of two reference pictures 'https://zovi0.github.io/public_misc/flux-dev-t-rex.png', 'https://zovi0.github.io/public_misc/blue-mug.jpg' ], }), }).then((res) => res.json()); console.log(response); }; main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "images": [ { "url": "https://cdn.aimlapi.com/squirrel/files/lion/rXRknU80d-8ywPnLBq4G8_59b65fe44d8046a38ab9e524a5a8b61c.jpg", "width": 1184, "height": 880, "content_type": "image/jpeg" } ], "timings": {}, "seed": 1068148133, "has_nsfw_concepts": [ false ], "prompt": "Place this image with the t-rex on this mug from the second image as a print. Make it look fit and natural." } ``` {% endcode %}

"Place this image with the t-rex on this mug from the second image as a print. Make it look fit and natural."

--- # Source: https://docs.aimlapi.com/api-references/image-models/flux/flux-kontext-pro-text-to-image.md # flux/kontext-pro/text-to-image {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `flux/kontext-pro/text-to-image` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A new Flux model optimized for faster generation speed.
| Model | Properties of Generated Images |
| ----- | ------------------------------ |
| `flux/kontext-pro/text-to-image` | Format: JPEG, PNG<br>Image size can't be set directly — only a preset aspect ratio can be chosen.<br>Default aspect ratio and size: 1:1, 1024x1024 |
## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["flux/kontext-pro/text-to-image"]},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"num_images":{"type":"number","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."},"seed":{"type":"integer","minimum":1,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"guidance_scale":{"type":"number","minimum":1,"maximum":20,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt when looking for a related image to show you."},"safety_tolerance":{"type":"string","enum":["1","2","3","4","5","6"],"default":"2","description":"The safety tolerance level for the generated image. 1 being the most strict and 5 being the most permissive."},"output_format":{"type":"string","enum":["jpeg","png"],"default":"jpeg","description":"The format of the generated image."},"aspect_ratio":{"type":"string","enum":["21:9","16:9","4:3","3:2","1:1","2:3","3:4","9:16","9:21"],"default":"16:9","description":"The aspect ratio of the generated image."}},"required":["model","prompt"],"title":"flux/kontext-pro/text-to-image"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified size using a simple prompt. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.", "model": "flux/kontext-pro/text-to-image", "aspect_ratio": "21:9" } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'flux/kontext-pro/text-to-image', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.', aspect_ratio: '21:9', }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "images": [ { "url": "https://cdn.aimlapi.com/squirrel/files/koala/6e4yw7_YnA8tEe03QW8wW_5298e11de5a24f1f9cf4f277cbdd3316.jpg", "width": 1568, "height": 672, "content_type": "image/jpeg" } ], "timings": {}, "seed": 2561481494, "has_nsfw_concepts": [ false ], "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses." } ``` {% endcode %}
We obtained the following 1568x672 image by running this code example:

'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.'

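The examples above print the parsed JSON directly. In production code you will likely want to check the HTTP status before using the response, since a failed request (for example, an invalid API key or a malformed `aspect_ratio`) typically returns an error payload rather than an `images` array. A minimal sketch of that pattern:

{% code overflow="wrap" %}
```python
import requests

API_KEY = ""  # insert your AIML API key

response = requests.post(
    "https://api.aimlapi.com/v1/images/generations",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "flux/kontext-pro/text-to-image",
        "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.",
        "aspect_ratio": "21:9",
    },
)

if response.ok:
    data = response.json()
    print(data["images"][0]["url"])
else:
    # Surface the status code and body to make debugging easier.
    print(f"Request failed with HTTP {response.status_code}: {response.text}")
```
{% endcode %}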
--- # Source: https://docs.aimlapi.com/api-references/image-models/flux/flux-pro-v1.1-ultra.md # flux-pro/v1.1-ultra {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `flux-pro/v1.1-ultra` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview An advanced AI image generator designed to create high-resolution images rapidly and efficiently. It is optimized for various applications, including content creation, e-commerce, and advertising, providing users with the ability to generate visually appealing images at unprecedented speeds.
| Model | Generated image properties |
| ----- | -------------------------- |
| `flux-pro/v1.1-ultra` | Format: JPEG, PNG<br>Fixed size: 2752x1536 |
## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["flux-pro/v1.1-ultra"]},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"num_images":{"type":"number","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."},"seed":{"type":"integer","minimum":1,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"safety_tolerance":{"type":"string","enum":["1","2","3","4","5","6"],"default":"2","description":"The safety tolerance level for the generated image. 1 being the most strict and 5 being the most permissive."},"output_format":{"type":"string","enum":["jpeg","png"],"default":"jpeg","description":"The format of the generated image."},"enable_safety_checker":{"type":"boolean","default":true,"description":"If set to True, the safety checker will be enabled."},"aspect_ratio":{"type":"string","enum":["21:9","16:9","4:3","3:2","1:1","2:3","3:4","9:16","9:21"],"default":"16:9","description":"The aspect ratio of the generated image."},"raw":{"type":"boolean","enum":[false],"default":false,"description":"Generate less processed, more natural-looking images."}},"required":["model","prompt"],"title":"flux-pro/v1.1-ultra"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image using a simple prompt. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.", "model": "flux-pro/v1.1-ultra", } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'flux-pro/v1.1-ultra', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.', }), }); const data = await response.json(); console.log(data); } main() ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { images: [ { url: 'https://cdn.aimlapi.com/squirrel/files/koala/xt87Jiwy69wpF4jGEFKbZ_806ed881d147466d81af027c6779cbc5.jpg', width: 2752, height: 1536, content_type: 'image/jpeg' } ], timings: {}, seed: 526588311, has_nsfw_concepts: [ false ], prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.' } ``` {% endcode %}
We obtained the following 2752x1536 image by running this code example:

'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.'

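The table above lists a 2752x1536 output, which matches the default 16:9 `aspect_ratio`. If you pick another preset from the schema (or switch `output_format` to `png`), the returned dimensions may differ, so the sketch below simply prints the `width` and `height` reported by the API for a vertical 9:16 request. This is an illustrative sketch, not an additional official example.

{% code overflow="wrap" %}
```python
import requests

API_KEY = ""  # insert your AIML API key

response = requests.post(
    "https://api.aimlapi.com/v1/images/generations",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "flux-pro/v1.1-ultra",
        "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.",
        "aspect_ratio": "9:16",   # vertical preset from the schema
        "output_format": "png",
    },
)
response.raise_for_status()

image = response.json()["images"][0]
# Check which resolution the API produced for this aspect ratio.
print(image["width"], image["height"], image["url"])
```
{% endcode %}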
--- # Source: https://docs.aimlapi.com/api-references/image-models/flux/flux-pro-v1.1.md # flux-pro/v1.1 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `flux-pro/v1.1` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview `flux-pro/v1.1` is a new image generation model with inference speed increased sixfold compared to the previous `flux-pro`. It also features enhanced generation quality and greater accuracy in following prompts.
| Model | Properties of Generated Images |
| ----- | ------------------------------ |
| `flux-pro/v1.1` | Format: JPEG, PNG<br>Min size: 256x256<br>Max size: 1440x1440<br>Default size: 1024x768<br>For both height and width, the value must be a multiple of 32. |
## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["flux-pro/v1.1"]},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"num_images":{"type":"number","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."},"seed":{"type":"integer","minimum":1,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"image_size":{"anyOf":[{"type":"object","properties":{"width":{"type":"integer","minimum":256,"maximum":1440,"default":1024},"height":{"type":"integer","minimum":256,"maximum":1440,"default":768}},"description":"For both height and width, the value must be a multiple of 32."},{"type":"string","enum":["square_hd","square","portrait_4_3","portrait_16_9","landscape_4_3","landscape_16_9"],"description":"The size of the generated image."}],"default":"landscape_4_3"},"safety_tolerance":{"type":"string","enum":["1","2","3","4","5","6"],"default":"2","description":"The safety tolerance level for the generated image. 1 being the most strict and 5 being the most permissive."},"output_format":{"type":"string","enum":["jpeg","png"],"default":"jpeg","description":"The format of the generated image."},"enable_safety_checker":{"type":"boolean","default":true,"description":"If set to True, the safety checker will be enabled."}},"required":["model","prompt"],"title":"flux-pro/v1.1"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified size using a simple prompt. {% hint style="warning" %} The maximum value for both width and height is `1440`, and the minimum is `256`.\ The value must be a multiple of 32. 
{% endhint %} {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.", "model": "flux-pro/v1.1", 'image_size': { "width": 1024, "height": 320 } } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'flux-pro/v1.1', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.', image_size: { width: 1024, height: 320 }, }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json { images: [ { url: 'https://cdn.aimlapi.com/squirrel/files/zebra/i1zUlcHZ0o3V2DEeyi2bL_6a366eac61354652a0430750e53bc839.jpg', width: 1024, height: 320, content_type: 'image/jpeg' } ], timings: {}, seed: 1345862631, has_nsfw_concepts: [ false ], prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.' } ``` {% endcode %}
We obtained the following 1024x320 image (JPEG) by running this code example:

"A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses."

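Instead of an explicit `width`/`height` object, `image_size` also accepts the preset strings listed in the schema (`square_hd`, `square`, `portrait_4_3`, `portrait_16_9`, `landscape_4_3`, `landscape_16_9`). A minimal sketch using a preset:

{% code overflow="wrap" %}
```python
import requests

API_KEY = ""  # insert your AIML API key

response = requests.post(
    "https://api.aimlapi.com/v1/images/generations",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "flux-pro/v1.1",
        "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.",
        "image_size": "portrait_16_9",  # preset string instead of an explicit width/height object
    },
)
response.raise_for_status()
print(response.json()["images"][0]["url"])
```
{% endcode %}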
--- # Source: https://docs.aimlapi.com/api-references/image-models/flux/flux-pro.md # flux-pro {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `flux-pro` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview This is the first version of Flux Pro, yet it generates images of unmatched quality, outperforming popular models like Midjourney v6.0, DALL·E 3 (HD), and SD3-Ultra. You can also view [a detailed comparison of this model](https://aimlapi.com/comparisons/flux-1-vs-dall-e-3) on our main website.
| Model | Properties of Generated Images |
| ----- | ------------------------------ |
| `flux-pro` | Format: JPEG, PNG<br>Min size: 256x256<br>Max size: 1440x1440<br>Default size: 1024x768<br>For both height and width, the value must be a multiple of 32. |
## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["flux-pro"]},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"num_images":{"type":"number","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."},"seed":{"type":"integer","minimum":1,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"image_size":{"anyOf":[{"type":"object","properties":{"width":{"type":"integer","minimum":256,"maximum":1440,"default":1024},"height":{"type":"integer","minimum":256,"maximum":1440,"default":768}},"description":"For both height and width, the value must be a multiple of 32."},{"type":"string","enum":["square_hd","square","portrait_4_3","portrait_16_9","landscape_4_3","landscape_16_9"],"description":"The size of the generated image."}],"default":"landscape_4_3"},"num_inference_steps":{"type":"integer","minimum":1,"maximum":50,"default":50,"description":"The number of inference steps to perform."},"guidance_scale":{"type":"number","minimum":1,"maximum":20,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt when looking for a related image to show you."},"safety_tolerance":{"type":"string","enum":["1","2","3","4","5","6"],"default":"2","description":"The safety tolerance level for the generated image. 1 being the most strict and 5 being the most permissive."},"output_format":{"type":"string","enum":["jpeg","png"],"default":"jpeg","description":"The format of the generated image."}},"required":["model","prompt"],"title":"flux-pro"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified size using a simple prompt. {% hint style="warning" %} The maximum value for both width and height is `1440`, and the minimum is `256`.\ The value must be a multiple of 32. 
{% endhint %} {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json" }, json={ "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.", "model": "flux-pro", "image_size": { "width": 1024, "height": 320 } } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'flux-pro', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.', image_size: { width: 1024, height: 320 }, }), }); const data = await response.json(); console.log('Generation:', data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { images: [ { url: 'https://cdn.aimlapi.com/squirrel/files/elephant/G1UYumZngIkBozNrfiztZ_8d758419045c4c16b563511d6f5f3966.jpg', width: 1024, height: 320, content_type: 'image/jpeg' } ], timings: {}, seed: 711728385, has_nsfw_concepts: [ false ], prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.' } ``` {% endcode %}
We obtained the following nice 1024x320 image by running this code example:

"A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses."

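The API returns a CDN URL rather than the image bytes themselves, so saving the result locally is a separate step. Here is a minimal sketch based on the response shape shown above (the helper name and file name are ours, not part of the API):

{% code overflow="wrap" %}
```python
import requests

def save_first_image(generation_response: dict, file_name: str = "t-rex.jpg") -> None:
    """Download the first generated image using the URL from the generation response."""
    image_url = generation_response["images"][0]["url"]
    image_data = requests.get(image_url)
    image_data.raise_for_status()
    with open(file_name, "wb") as file:
        file.write(image_data.content)

# Usage: call save_first_image(data) right after running the Quick Example above.
```
{% endcode %}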
--- # Source: https://docs.aimlapi.com/api-references/image-models/flux/flux-realism.md # flux-realism {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `flux-realism` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A state-of-the-art model designed to generate photorealistic images from textual descriptions.\ It allows users to create lifelike visuals without the need for extensive realism-related prompts.
| Model | Generated image properties |
| --- | --- |
| flux-realism | Format: JPEG, PNG<br>Min size: 512x512<br>Max size: 1536x1536<br>Default size: 1024x768<br>For both height and width, the value must be a multiple of 32. |
## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["flux-realism"]},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"num_images":{"type":"number","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."},"seed":{"type":"integer","minimum":1,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"image_size":{"anyOf":[{"type":"object","properties":{"width":{"type":"integer","minimum":512,"maximum":1536,"default":1024},"height":{"type":"integer","minimum":512,"maximum":1536,"default":768}},"description":"For both height and width, the value must be a multiple of 32."},{"type":"string","enum":["square_hd","square","portrait_4_3","portrait_16_9","landscape_4_3","landscape_16_9"],"description":"The size of the generated image."}],"default":"landscape_4_3"},"guidance_scale":{"type":"number","minimum":1,"maximum":20,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt when looking for a related image to show you."},"num_inference_steps":{"type":"integer","minimum":1,"maximum":50,"default":50,"description":"The number of inference steps to perform."},"enable_safety_checker":{"type":"boolean","default":true,"description":"If set to True, the safety checker will be enabled."},"output_format":{"type":"string","enum":["jpeg","png"],"default":"jpeg","description":"The format of the generated image."}},"required":["model","prompt"],"title":"flux-realism"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified size using a simple prompt. {% hint style="warning" %} The maximum value for both width and height is `1536`, and the minimum is `512`.\ The value must be a multiple of 32. 
{% endhint %} {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.", "model": "flux-realism", "image_size": { "width": 1472, "height": 512 } } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'flux-realism', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.', image_size: { width: 1472, height: 512 }, }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { images: [ { url: 'https://cdn.aimlapi.com/eagle/files/elephant/RmRsL9NMW_kkRy6MemjZJ_ac9897dd871842e2a689b8bc24b4bf08.jpg', width: 1472, height: 512, content_type: 'image/jpeg' } ], timings: { inference: 4.4450759180035675 }, seed: 3082066483, has_nsfw_concepts: [ false ], prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.' } ``` {% endcode %}
We obtained the following 1472x512 image by running this code example. The textures look significantly more realistic compared to the earlier FLUX models.

'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.'

We were also curious how the model would perform on larger-scale scenes. Here's what we got with the prompt `'Epic battle of spaceships'`.

'Epic battle of spaceships'

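According to the schema above, `seed` makes generations repeatable and `num_images` (1–4) returns several variations in a single call. The sketch below simply adds these two optional parameters to the Quick Example request; it assumes the `images` array from the sample responses also lists every variation when `num_images` is greater than 1:

{% code overflow="wrap" %}
```python
import requests

def main():
    response = requests.post(
        "https://api.aimlapi.com/v1/images/generations",
        headers={
            # Insert your AIML API Key instead of :
            "Authorization": "Bearer ",
            "Content-Type": "application/json",
        },
        json={
            "model": "flux-realism",
            "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.",
            "num_images": 2,  # up to 4 variations per request
            "seed": 42,       # a fixed seed plus the same prompt should reproduce the same images
        },
    )
    response.raise_for_status()
    for index, image in enumerate(response.json()["images"]):
        print(index, image["url"])

if __name__ == "__main__":
    main()
```
{% endcode %}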
--- # Source: https://docs.aimlapi.com/api-references/image-models/flux/flux-schnell.md # flux/schnell {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `flux/schnell` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A state-of-the-art text-to-image generation model designed to create high-quality images from textual descriptions. With a robust architecture of 12 billion parameters, it leverages advanced techniques to produce images that rival those generated by leading closed-source models.
| Model | Generated image properties |
| --- | --- |
| flux/schnell | Format: PNG<br>Min size: 64x64<br>Max size: 1536x1536<br>Default size: 1024x768<br>For both height and width, the value must be a multiple of 32. |
## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["flux/schnell"]},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"num_images":{"type":"number","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."},"seed":{"type":"integer","minimum":1,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"image_size":{"anyOf":[{"type":"object","properties":{"width":{"type":"integer","minimum":64,"maximum":1536,"default":1024},"height":{"type":"integer","minimum":64,"maximum":1536,"default":768}},"description":"For both height and width, the value must be a multiple of 32."},{"type":"string","enum":["square_hd","square","portrait_4_3","portrait_16_9","landscape_4_3","landscape_16_9"],"description":"The size of the generated image."}],"default":"landscape_4_3"},"num_inference_steps":{"type":"integer","minimum":1,"description":"The number of inference steps to perform."},"enable_safety_checker":{"type":"boolean","default":true,"description":"If set to True, the safety checker will be enabled."}},"required":["model","prompt"],"title":"flux/schnell"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified size using a simple prompt. {% hint style="warning" %} The minimum size is `64x64`, and the maximum is `1536x1536`.\ The width and height values must be a multiple of 32. 
{% endhint %} {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "flux/schnell", "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.", "image_size": { "width": 1440, "height": 512 } } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'flux/schnell', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.', image_size: { width: 1440, height: 512 }, }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { images: [ { url: 'https://cdn.aimlapi.com/eagle/files/lion/dSqd5BMP3pfaiKEnFXiiE.png', width: 1440, height: 512, content_type: 'image/png' } ], timings: { inference: 0.3458922009449452 }, seed: 454423425, has_nsfw_concepts: [ false ], prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.' } ``` {% endcode %}
We obtained the following 1440x512 image by running this code example:

'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.'

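Besides an explicit width/height object, the schema also accepts preset `image_size` strings such as `square_hd` or `landscape_16_9`. Here is a sketch of the same request using a preset instead of explicit dimensions:

{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

def main():
    response = requests.post(
        "https://api.aimlapi.com/v1/images/generations",
        headers={
            # Insert your AIML API Key instead of :
            "Authorization": "Bearer ",
            "Content-Type": "application/json",
        },
        json={
            "model": "flux/schnell",
            "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.",
            "image_size": "square_hd",  # preset from the schema instead of an explicit width/height
        },
    )
    print(json.dumps(response.json(), indent=2, ensure_ascii=False))

if __name__ == "__main__":
    main()
```
{% endcode %}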
--- # Source: https://docs.aimlapi.com/api-references/image-models/flux/flux-srpo-image-to-image.md # flux/srpo/image-to-image {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `flux/srpo/image-to-image` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview [flux/dev/image-to-image](https://docs.aimlapi.com/api-references/image-models/flux/flux-dev-image-to-image) model upgraded with Tencent’s SRPO technique.
| Model | Generated image properties |
| --- | --- |
| flux/srpo/image-to-image | Format: JPEG, PNG<br>Fixed size: Matches the dimensions of the reference image. |
## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["flux/srpo/image-to-image"]},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"num_images":{"type":"number","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."},"seed":{"type":"integer","minimum":1,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"num_inference_steps":{"type":"integer","minimum":1,"maximum":50,"default":40,"description":"The number of inference steps to perform."},"guidance_scale":{"type":"number","minimum":1,"maximum":20,"default":4.5,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt when looking for a related image to show you."},"sync_mode":{"type":"boolean","default":false,"description":"If set to true, the function will wait for the image to be generated and uploaded before returning the response. This will increase the latency of the function but it allows you to get the image directly in the response without going through the CDN."},"enable_safety_checker":{"type":"boolean","default":true,"description":"If set to True, the safety checker will be enabled."},"output_format":{"type":"string","enum":["jpeg","png"],"default":"jpeg","description":"The format of the generated image."},"acceleration":{"type":"string","enum":["none","regular","high"],"default":"regular","description":"The speed of the generation. The higher the speed, the faster the generation."},"image_url":{"type":"string","format":"uri","description":"The URL of the reference image."},"strength":{"type":"number","minimum":0,"maximum":1,"default":0.95,"description":"Determines how much the prompt influences the generated image."}},"required":["model","prompt","image_url"],"title":"flux/srpo/image-to-image"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate a new image using the one from the [flux/dev Quick Example](https://docs.aimlapi.com/api-references/image-models/flux-dev#quick-example) as a reference — and make a simple change to it with a prompt. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "flux/srpo/image-to-image", "prompt": "Add a bird to the foreground of the photo.", "image_url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png", "strength": 0.9 } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'flux/srpo/image-to-image', prompt: 'Add a bird to the foreground of the photo.', image_url: 'https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png', strength: 0.9, }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "images": [ { "url": "https://v3b.fal.media/files/b/koala/1TOtgew8As_QBlCyKy4Z-.jpg", "width": 1024, "height": 768, "content_type": "image/jpeg" } ], "timings": { "inference": 0.947831045370549 }, "seed": 484902001, "has_nsfw_concepts": [ false ], "prompt": "Add a bird to the foreground of the photo.", "data": [ { "url": "https://v3b.fal.media/files/b/koala/1TOtgew8As_QBlCyKy4Z-.jpg", "width": 1024, "height": 768, "content_type": "image/jpeg" } ], "meta": { "usage": { "tokens_used": 52500 } } } ``` {% endcode %}
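The `strength` parameter (0–1, default 0.95) determines how much the prompt influences the result relative to the reference image. If you want to compare its effect, a simple sweep over a few values might look like the sketch below (an illustration that reuses the request from the Quick Example, not an official recipe):

{% code overflow="wrap" %}
```python
import requests

def generate(strength: float) -> str:
    """Run one image-to-image generation and return the URL of the result."""
    response = requests.post(
        "https://api.aimlapi.com/v1/images/generations",
        headers={
            # Insert your AIML API Key instead of :
            "Authorization": "Bearer ",
            "Content-Type": "application/json",
        },
        json={
            "model": "flux/srpo/image-to-image",
            "prompt": "Add a bird to the foreground of the photo.",
            "image_url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png",
            "strength": strength,
        },
    )
    response.raise_for_status()
    return response.json()["images"][0]["url"]

# Lower values keep more of the reference image; higher values follow the prompt more closely.
for strength in (0.3, 0.6, 0.9):
    print(strength, generate(strength))
```
{% endcode %}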
| Reference Image | Generated Image | | ------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | ![](https://cdn.aimlapi.com/eagle/files/monkey/GHx5aT0PR9GXtGi3Cx7CE.png) | | --- # Source: https://docs.aimlapi.com/api-references/image-models/flux/flux-srpo-text-to-image.md # flux/srpo/text-to-image {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `flux/srpo` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview [flux/dev](https://docs.aimlapi.com/api-references/image-models/flux/flux-dev) model upgraded with Tencent’s SRPO technique.
| Model | Generated image properties |
| --- | --- |
| flux/srpo | Format: PNG<br>Min size: 512x512<br>Max size: 1536x1536<br>Default size: 1024x768<br>For both height and width, the value must be a multiple of 32. |
## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["flux/srpo"]},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"num_images":{"type":"number","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."},"seed":{"type":"integer","minimum":1,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"image_size":{"anyOf":[{"type":"object","properties":{"width":{"type":"integer","minimum":512,"maximum":1536,"default":1024},"height":{"type":"integer","minimum":512,"maximum":1536,"default":768}},"description":"For both height and width, the value must be a multiple of 32."},{"type":"string","enum":["square_hd","square","portrait_4_3","portrait_16_9","landscape_4_3","landscape_16_9"],"description":"The size of the generated image."}],"default":"landscape_4_3"},"num_inference_steps":{"type":"integer","minimum":1,"maximum":50,"default":28,"description":"The number of inference steps to perform."},"guidance_scale":{"type":"number","minimum":1,"maximum":20,"default":4.5,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt when looking for a related image to show you."},"sync_mode":{"type":"boolean","default":false,"description":"If set to true, the function will wait for the image to be generated and uploaded before returning the response. This will increase the latency of the function but it allows you to get the image directly in the response without going through the CDN."},"enable_safety_checker":{"type":"boolean","default":true,"description":"If set to True, the safety checker will be enabled."},"output_format":{"type":"string","enum":["jpeg","png"],"default":"jpeg","description":"The format of the generated image."},"acceleration":{"type":"string","enum":["none","regular","high"],"default":"regular","description":"The speed of the generation. The higher the speed, the faster the generation."}},"required":["model","prompt"],"title":"flux/srpo"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image using a simple prompt. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "flux/srpo", "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.", "image_size": { "width": 1440, "height": 512 } } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'flux/srpo', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.', image_size: { width: 1440, height: 512 } }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "images": [ { "url": "https://cdn.aimlapi.com/eagle/files/zebra/GtH4bTLhiXD7YTwYAlO21.jpeg", "width": 1440, "height": 512, "content_type": "image/jpeg" } ], "timings": { "inference": 0.747110141441226 }, "seed": 490733907, "has_nsfw_concepts": [ false ], "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.", "data": [ { "url": "https://cdn.aimlapi.com/eagle/files/zebra/GtH4bTLhiXD7YTwYAlO21.jpeg", "width": 1440, "height": 512, "content_type": "image/jpeg" } ], "meta": { "usage": { "tokens_used": 52500 } } } ``` {% endcode %}
We obtained the following 1440x512 image by running this code example:

'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.'

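The schema above also documents a `sync_mode` flag, which waits for the image and returns it in the response instead of via a CDN link, and the response schema includes a `b64_json` field. The exact payload encoding isn't shown on this page, so treat the following as a hedged sketch that assumes `b64_json` carries the base64-encoded image bytes:

{% code overflow="wrap" %}
```python
import base64
import requests

def main():
    response = requests.post(
        "https://api.aimlapi.com/v1/images/generations",
        headers={
            # Insert your AIML API Key instead of :
            "Authorization": "Bearer ",
            "Content-Type": "application/json",
        },
        json={
            "model": "flux/srpo",
            "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.",
            "sync_mode": True,  # documented option: wait for the image instead of a CDN link
        },
    )
    response.raise_for_status()
    for item in response.json().get("data", []):
        if item.get("b64_json"):
            # Assumption: b64_json holds the base64-encoded image bytes.
            with open("t-rex-srpo.png", "wb") as file:
                file.write(base64.b64decode(item["b64_json"]))
        elif item.get("url"):
            print("Image URL:", item["url"])

if __name__ == "__main__":
    main()
```
{% endcode %}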
--- # Source: https://docs.aimlapi.com/api-references/image-models/flux.md # Flux Flux, a family of image models from Black Forest Labs, is represented in our API by the following models:
| Model | Generated image properties |
| --- | --- |
| flux-pro | Format: JPG, PNG<br>Min size: 256x256<br>Max size: 1440x1440<br>Default size: 1024x768<br>For both height and width, the value must be a multiple of 32. |
| flux-pro/v1.1 | Format: JPG, PNG<br>Min size: 256x256<br>Max size: 1440x1440<br>Default size: 1024x768<br>For both height and width, the value must be a multiple of 32. |
| flux-pro/v1.1-ultra | Format: JPG, PNG<br>Fixed size: 2752x1536 |
| flux-realism | Format: JPG, PNG<br>Min size: 512x512<br>Max size: 1536x1536<br>Default size: 1024x768<br>For both height and width, the value must be a multiple of 32. |
| flux/schnell | Format: PNG<br>Min size: 64x64<br>Max size: 1536x1536<br>Default size: 1024x768<br>For both height and width, the value must be a multiple of 32. |
| flux/dev | Format: PNG<br>Min size: 512x512<br>Max size: 1536x1536<br>Default size: 1024x768<br>For both height and width, the value must be a multiple of 32. |
| flux/dev/image-to-image | Format: PNG<br>Fixed size: Matches the dimensions of the reference image. |
| flux/kontext-max/text-to-image | Format: JPEG, PNG<br>Image size can't be set directly — only a preset aspect ratio can be chosen.<br>Default aspect ratio and size: 1:1, 1024x1024 |
| flux/kontext-max/image-to-image | Format: JPEG, PNG<br>Image size can't be set directly — only a preset aspect ratio can be chosen.<br>Default aspect ratio and size: 16:9, 1184x880 (well, not quite 16:9) |
| flux/kontext-pro/text-to-image | Format: JPEG, PNG<br>Image size can't be set directly — only a preset aspect ratio can be chosen.<br>Default aspect ratio and size: 1:1, 1024x1024 |
| flux/kontext-pro/image-to-image | Format: JPEG, PNG<br>Image size can't be set directly — only a preset aspect ratio can be chosen.<br>Default aspect ratio and size: 16:9, 1184x880 (well, not quite 16:9) |
--- # Source: https://docs.aimlapi.com/faq/free-tier.md # How to use the Free Tier? ## About AIML API has two “free” modes: 1. **Free (no billing method added)** — you can use a small set of free models to try the platform. 2. **Verified Free Tier (billing method added)** — you get **50,000 free credits** and **access to the full model catalog** for testing.\ *Adding a billing method **does not charge you automatically**. You only pay when you **purchase a plan**.* ## Free access without a billing method If you **didn’t add a payment method**, you can use AIML API for free with these models only:
> You can use them in: > > * [**AI Playground**](https://aimlapi.com/app/) > > * [**Via API** (Chat Completions)](https://docs.aimlapi.com/api-references/model-database#text-models-llm) > > > This is the easiest way to quickly test the platform without any billing setup. > > > > [Try in Playground](https://aimlapi.com/app/) ### Verified Free Tier (billing method added) If you add a billing method, you’ll receive **50,000 free credits** to test the platform with a much wider set of models. **Important:** adding a billing method does **not** withdraw money. Payments start only after you purchase a plan. #### What you can use with free credits Using 50,000 free credits, you can access: * [LLM models](https://docs.aimlapi.com/api-references/text-models-llm) * [Image models](https://docs.aimlapi.com/api-references/image-models) * [TTS models](https://docs.aimlapi.com/api-references/model-database) * [STT models](https://docs.aimlapi.com/api-references/model-database) * [Moderation models](https://docs.aimlapi.com/api-references/moderation-safety-models) * [OCR models](https://docs.aimlapi.com/api-references/model-database) * [Embedding models](https://docs.aimlapi.com/api-references/embedding-models) #### How far do 50,000 credits go? If you send short prompts (a few sentences) to non-image models, 50,000 credits is typically enough to try many models 1–10 times, depending on the model’s cost and workload. #### Error Message When you attempt to call the API after reaching the limit, you will receive an appropriate error.\ For example, if the `/v1/chat/completions` endpoint was called: {% code overflow="wrap" %} ```json { "message": "You have exhausted the available [plan.rule:api_token] resource limit. Update your payment method to continue using the service. For more information please visit https://aimlapi.com/app/billing", "path": "/v1/chat/completions", "requestId": "798b860e-98c2-4e8e-8c50-550bcfc2eccc", "statusCode": "403", "timestamp": "2025-03-11T07:13:27.813Z" } ``` {% endcode %} --- # Source: https://docs.aimlapi.com/capabilities/function-calling.md # Function Calling This article describes a specific capability of chat models: **function calling**, or simply **functions**.\ A list of models that support this feature is provided at the end of this page. ## Introduction When using text (chat) models via the API, you can define functions that the model can choose to call, generating a JSON object with the necessary arguments. The text model API itself does not execute these functions; instead, it outputs the JSON, which you can then use to call the function within your code. The latest models (gpt-4o, gpt-4-turbo, and gpt-3.5-turbo) are designed to detect when a function should be called based on the input and to produce JSON that closely matches the function signature. However, this functionality comes with potential risks. We strongly recommend implementing user confirmation steps before performing actions that could impact the real world (e.g., sending an email, posting online, making a purchase). This guide focuses on function calling with our text models API. ## Common Use Cases Function calling allows you to obtain structured data reliably from the model. For example, you can: * **Create assistants that answer questions by calling external APIs** * Example functions: `send_email(to: string, body: string)`, `get_current_weather(location: string, unit: 'celsius' | 'fahrenheit')` * **Convert natural language into API calls** * Example conversion: "Who are my top customers?" 
to `get_customers(min_revenue: int, created_before: string, limit: int)`, then call your internal API * **Extract structured data from text** * Example functions: `extract_data(name: string, birthday: string)`, `sql_query(query: string)` ## Basic Sequence of Steps for Function Calling 1. **Call the model** with the user query and a set of functions defined in the `functions` parameter. 2. **Model response**: The model may choose to call one or more functions. If so, it will output a stringified JSON object adhering to your custom schema (note: the model may hallucinate parameters). 3. **Parse the JSON**: In your code, parse the string into JSON and call the function with the provided arguments if they exist. 4. **Call the model again**: Append the function response as a new message and let the model summarize the results back to the user. ## Examples {% code title="python" overflow="wrap" %} ```python import os import json import openai client = openai.OpenAI( base_url="https://api.aimlapi.com/v1", api_key='AI_ML_API', ) tools = [ { "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": { "type": "object", "properties": { "location": { "type": "string", "description": "The city and state, e.g. San Francisco, CA" }, "unit": { "type": "string", "enum": [ "celsius", "fahrenheit" ] } } } } } ] messages = [ {"role": "system", "content": "You are a helpful assistant that can access external functions. The responses from these function calls will be appended to this dialogue. Please provide responses based on the information from these function calls."}, {"role": "user", "content": "What is the current temperature of New York, San Francisco, and Chicago?"} ] response = client.chat.completions.create( model="gpt-4o", messages=messages, tools=tools, tool_choice="auto", ) print(json.dumps(response.choices[0].message.model_dump()['tool_calls'], indent=2)) ``` {% endcode %} ## Models That Support Function Calling * [claude-3-haiku-20240307](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-3-haiku) * [claude-3-opus-20240229](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-3-opus) * [claude-3-5-haiku-20241022](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-3.5-haiku) * [claude-3-7-sonnet-20250219](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-3.7-sonnet) * [claude-opus-4-20250514](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4-opus) * [claude-sonnet-4-20250514](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4-sonnet) * [anthropic/claude-opus-4.1](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-opus-4.1) * [anthropic/claude-sonnet-4.5](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4-5-sonnet) * [anthropic/claude-opus-4-5](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4.5-opus) *** * [Qwen/Qwen2.5-7B-Instruct-Turbo](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen2.5-7b-instruct-turbo) * [Qwen/Qwen2.5-72B-Instruct-Turbo](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen2.5-72b-instruct-turbo) * [qwen-max](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen-max) * [qwen-max-2025-01-25](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen-max) * 
[qwen-plus](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen-plus) * [qwen-turbo](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen-turbo) * [Qwen/Qwen3-235B-A22B-fp8-tput](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-235b-a22b) * [alibaba/qwen3-32b](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-32b) * [alibaba/qwen3-coder-480b-a35b-instruct](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-coder-480b-a35b-instruct) * [alibaba/qwen3-235b-a22b-thinking-2507](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-235b-a22b-thinking-2507) * [alibaba/qwen3-max-preview](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-max-preview) * [alibaba/qwen3-max-instruct](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-max-instruct) * [alibaba/qwen3-vl-32b-instruct](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-vl-32b-instruct) * [alibaba/qwen3-vl-32b-thinking](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-vl-32b-thinking) *** * [baidu/ernie-4.5-21b-a3b](https://docs.aimlapi.com/api-references/text-models-llm/baidu/ernie-4.5-21b-a3b) * [baidu/ernie-4.5-21b-a3b-thinking](https://docs.aimlapi.com/api-references/text-models-llm/baidu/ernie-4.5-21b-a3b-thinking) * [baidu/ernie-4.5-300b-a47b](https://docs.aimlapi.com/api-references/text-models-llm/baidu/ernie-4.5-300b-a47b) * [baidu/ernie-4.5-vl-28b-a3b](https://docs.aimlapi.com/api-references/text-models-llm/baidu/ernie-4.5-vl-28b-a3b) * [baidu/ernie-4.5-vl-424b-a47b](https://docs.aimlapi.com/api-references/text-models-llm/baidu/ernie-4.5-vl-424b-a47b) *** * [google/gemini-2.0-flash](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.0-flash) * [google/gemini-2.5-flash-lite-preview](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.5-flash-lite-preview) * [google/gemini-2.5-flash](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.5-flash) * [google/gemini-2.5-pro](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.5-pro) * [google/gemini-3-pro-preview](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-3-pro-preview) *** * [meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo](https://docs.aimlapi.com/api-references/text-models-llm/meta/meta-llama-3.1-8b-instruct-turbo) * [meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo](https://docs.aimlapi.com/api-references/text-models-llm/meta/meta-llama-3.1-70b-instruct-turbo) * [meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo](https://docs.aimlapi.com/api-references/text-models-llm/meta/meta-llama-3.1-405b-instruct-turbo) * [meta-llama/Llama-3.2-3B-Instruct-Turbo](https://docs.aimlapi.com/api-references/text-models-llm/meta/llama-3.2-3b-instruct-turbo) * [meta-llama/Llama-3.3-70B-Instruct-Turbo](https://docs.aimlapi.com/api-references/text-models-llm/meta/llama-3.3-70b-instruct-turbo) * [meta-llama/LlamaGuard-2-8b](https://docs.aimlapi.com/api-references/moderation-safety-models/meta/meta-llama-guard-3-8b) * [meta-llama/llama-4-scout](https://docs.aimlapi.com/api-references/text-models-llm/meta/llama-4-maverick) * [meta-llama/llama-4-maverick](https://docs.aimlapi.com/api-references/text-models-llm/meta/llama-4-maverick) *** * [gpt-3.5-turbo](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-3.5-turbo) * 
[gpt-3.5-turbo-0125](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-3.5-turbo) * [gpt-3.5-turbo-1106](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-3.5-turbo) * [gpt-4](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4) * [gpt-4-0125-preview](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4-preview) * [gpt-4-1106-preview](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4-preview) * [gpt-4-turbo](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4-turbo) * [gpt-4-turbo-2024-04-09](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4-turbo) * [gpt-4o](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) * [gpt-4o-2024-05-13](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) * [gpt-4o-2024-08-06](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) * [chatgpt-4o-latest](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) * [gpt-4o-mini](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o-mini) * [gpt-4o-mini-2024-07-18](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o-mini) * [gpt-4o-audio-preview](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o-audio-preview) * [gpt-4o-mini-audio-preview](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o-mini-audio-preview) * [o1](https://docs.aimlapi.com/api-references/text-models-llm/openai/o1) * [o3-mini](https://docs.aimlapi.com/api-references/text-models-llm/openai/o3-mini) * [openai/o3-2025-04-16](https://docs.aimlapi.com/api-references/text-models-llm/openai/o3) * [openai/gpt-4.1-2025-04-14](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4.1) * [openai/gpt-4.1-mini-2025-04-14](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4.1-mini) * [openai/gpt-4.1-nano-2025-04-14](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4.1-nano) * [openai/o4-mini-2025-04-16](https://docs.aimlapi.com/api-references/text-models-llm/openai/o4-mini) * [openai/gpt-oss-20b](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-oss-20b) * [openai/gpt-oss-120b](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-oss-120b) * [openai/gpt-5-2025-08-07](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5) * [openai/gpt-5-mini-2025-08-07](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-mini) * [openai/gpt-5-nano-2025-08-07](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-nano) * [openai/gpt-5-1](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-1) * [openai/gpt-5-1-chat-latest](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-1-chat-latest) * [openai/gpt-5-1-codex](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-1-codex) * [openai/gpt-5-1-codex-mini](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-1-codex-mini) * [openai/gpt-5-2](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5.2) * [openai/gpt-5-2-chat-latest](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5.2-chat-latest) * [openai/gpt-5-2-codex](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5.2-codex) *** * [bytedance/seed-1-8](https://docs.aimlapi.com/api-references/text-models-llm/bytedance/seed-1.8) * 
[deepseek/deepseek-r1](https://docs.aimlapi.com/api-references/text-models-llm/deepseek/deepseek-r1) * [deepseek/deepseek-thinking-v3.2-exp](https://docs.aimlapi.com/api-references/text-models-llm/deepseek/deepseek-reasoner-v3.2-exp-thinking) * [deepseek/deepseek-non-thinking-v3.2-exp](https://docs.aimlapi.com/api-references/text-models-llm/deepseek/deepseek-reasoner-v3.2-exp-non-thinking) * [MiniMax-Text-01](https://docs.aimlapi.com/api-references/text-models-llm/minimax/text-01) * [minimax/m1](https://docs.aimlapi.com/api-references/text-models-llm/minimax/m1) * [minimax/m2-1](https://docs.aimlapi.com/api-references/text-models-llm/minimax/m2-1) * [mistralai/mistral-tiny](https://docs.aimlapi.com/api-references/text-models-llm/mistral-ai/mistral-tiny) * [mistralai/mistral-nemo](https://docs.aimlapi.com/api-references/text-models-llm/mistral-ai/mistral-nemo) * [moonshot/kimi-k2-preview](https://docs.aimlapi.com/api-references/text-models-llm/moonshot/kimi-k2-preview) * [moonshot/kimi-k2-0905-preview](https://docs.aimlapi.com/api-references/text-models-llm/moonshot/kimi-k2-preview) * [moonshot/kimi-k2-turbo-preview](https://docs.aimlapi.com/api-references/text-models-llm/moonshot/kimi-k2-turbo-preview) * [nvidia/nemotron-nano-9b-v2](https://docs.aimlapi.com/api-references/text-models-llm/nvidia/nemotron-nano-9b-v2) * [nvidia/nemotron-nano-12b-v2-vl](https://docs.aimlapi.com/api-references/text-models-llm/nvidia/llama-3.1-nemotron-70b-1) * [x-ai/grok-3-beta](https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-3-beta) * [x-ai/grok-3-mini-beta](https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-3-mini-beta) * [x-ai/grok-4-07-09](https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-4) * [x-ai/grok-code-fast-1](https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-code-fast-1) * [x-ai/grok-4-fast-non-reasoning](https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-4-fast-non-reasoning) * [x-ai/grok-4-fast-reasoning](https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-4-fast-reasoning) * [x-ai/grok-4-1-fast-non-reasoning](https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-4-1-fast-non-reasoning) * [x-ai/grok-4-1-fast-reasoning](https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-4-1-fast-reasoning) * [zhipu/glm-4.5-air](https://docs.aimlapi.com/api-references/text-models-llm/zhipu/glm-4.5-air) * [zhipu/glm-4.5](https://docs.aimlapi.com/api-references/text-models-llm/zhipu/glm-4.5) * [zhipu/glm-4.7](https://docs.aimlapi.com/api-references/text-models-llm/zhipu/glm-4.7) --- # Source: https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.0-flash-exp.md # gemini-2.0-flash-exp

This documentation is valid for the following list of our models:

* `google/gemini-2.0-flash-exp`
* `gemini-2.0-flash-exp`
Try in Playground
## Model Overview A cutting-edge multimodal AI model developed by Google DeepMind, designed to power agentic experiences. This model is capable of processing text and images. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to. :digit\_four: **(Optional) Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
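The full code example referenced in step 2 appears at the bottom of the original page. For convenience, here is a minimal sketch of such a request, based on the schema below (the prompt text is just an illustration):

{% code overflow="wrap" %}
```python
import requests

def main():
    response = requests.post(
        "https://api.aimlapi.com/v1/chat/completions",
        headers={
            # Insert your AIML API Key instead of :
            "Authorization": "Bearer ",
            "Content-Type": "application/json",
        },
        json={
            "model": "google/gemini-2.0-flash-exp",
            "messages": [
                {"role": "user", "content": "In one sentence, what makes an AI experience 'agentic'?"}
            ],
        },
    )
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])

if __name__ == "__main__":
    main()
```
{% endcode %}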
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/gemini-2.0-flash-exp","gemini-2.0-flash-exp"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. 
Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"google/gemini-2.0-flash-exp"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"google/gemini-2.0-flash-exp", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'google/gemini-2.0-flash-exp', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "2025-04-09|09:53:23.624687-07|5.250.254.39|-1825976509",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hello there! How can I help you today?\n"
      }
    }
  ],
  "created": 1744217603,
  "model": "google/gemini-2.0-flash-exp",
  "usage": {
    "prompt_tokens": 5,
    "completion_tokens": 173,
    "total_tokens": 178
  }
}
```
{% endcode %}
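In application code, the assistant's reply and the billing details can be read straight from this structure. A minimal sketch, assuming `data` is the parsed response produced by the code example above:

```python
# Assumes `data` holds the parsed JSON response from the code example above.
reply = data["choices"][0]["message"]["content"]
print(reply)  # e.g. "Hello there! How can I help you today?"

# Token counts used for billing are reported under "usage".
print(data["usage"]["total_tokens"])
```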
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.0-flash.md # gemini-2.0-flash

{% hint style="info" %}
This documentation is valid for the following model:

* `google/gemini-2.0-flash`
{% endhint %}

Try in Playground
## Model Overview

A cutting-edge multimodal AI model developed by Google DeepMind, designed to power agentic experiences. This model is capable of processing text and images.

## How to Make a Call
**Step-by-Step Instructions**

1. **Setup You Can’t Skip**
   * [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).
   * [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI.
2. **Copy the code example**

   At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.
3. **Modify the code example**
   * Replace `` with your actual AI/ML API key from your account.
   * Insert your question or request into the `content` field; this is what the model will respond to.
4. **(Optional) Adjust other optional parameters if needed**

   Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior; a brief sketch follows these instructions. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.
5. **Run your modified code**

   Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
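As a sketch of step 4, the request below adds two of the optional parameters listed in the API schema (`temperature` and `max_tokens`); the key placeholder and the values shown are illustrative only:

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",  # placeholder: insert your AIML API Key
        "Content-Type": "application/json",
    },
    json={
        "model": "google/gemini-2.0-flash",
        "messages": [{"role": "user", "content": "Hello"}],
        "temperature": 0.2,   # optional: lower values give more focused, deterministic output
        "max_tokens": 256,    # optional: upper bound on the number of generated tokens
    },
)
print(response.json())
```
{% endcode %}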
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/gemini-2.0-flash","gemini-2.0-flash"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. 
Keep n as 1 to minimize costs."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. 
Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. 
Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."}},"required":["model","messages"],"title":"google/gemini-2.0-flash"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. 
Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"google/gemini-2.0-flash", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'google/gemini-2.0-flash', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "2025-04-10|01:16:19.235787-07|9.7.175.26|-701765511",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?\n"
      }
    }
  ],
  "created": 1744272979,
  "model": "google/gemini-2.0-flash",
  "usage": {
    "prompt_tokens": 0,
    "completion_tokens": 8,
    "total_tokens": 8
  }
}
```
{% endcode %}
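The API schema above also documents a streaming mode (`stream: true`, returned as `text/event-stream`). A minimal consumption sketch, assuming the stream uses the common OpenAI-style `data: {...}` lines terminated by `data: [DONE]` (the exact framing is not spelled out in the schema):

{% code overflow="wrap" %}
```python
import json
import requests

# Streaming sketch; assumes OpenAI-style server-sent events
# ("data: {...}" lines, ending with "data: [DONE]").
with requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",  # placeholder: insert your AIML API Key
        "Content-Type": "application/json",
    },
    json={
        "model": "google/gemini-2.0-flash",
        "messages": [{"role": "user", "content": "Write a haiku about streaming."}],
        "stream": True,
    },
    stream=True,
) as response:
    for line in response.iter_lines():
        if not line or not line.startswith(b"data: "):
            continue
        payload = line[len(b"data: "):]
        if payload == b"[DONE]":
            break
        chunk = json.loads(payload)
        if not chunk.get("choices"):
            continue  # e.g. a final usage-only chunk when include_usage is set
        delta = chunk["choices"][0]["delta"]
        print(delta.get("content", ""), end="", flush=True)
print()
```
{% endcode %}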
--- # Source: https://docs.aimlapi.com/api-references/image-models/google/gemini-2.5-flash-image-edit.md # Gemini 2.5 Flash Image Edit (Nano Banana) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `google/gemini-2.5-flash-image-edit` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview The model takes multiple images as input, with the prompt defining how they are used or combined. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema
**Aspect ratio / Resolution Table**

| Aspect ratio | Resolution | Credits |
| ------------ | ---------- | ------- |
| 1:1          | 1024×1024  | 84 000  |
| 2:3          | 832×1248   | 84 000  |
| 3:2          | 1248×832   | 84 000  |
| 3:4          | 864×1184   | 84 000  |
| 4:3          | 1184×864   | 84 000  |
| 4:5          | 896×1152   | 84 000  |
| 5:4          | 1152×896   | 84 000  |
| 9:16         | 768×1344   | 84 000  |
| 16:9         | 1344×768   | 84 000  |
| 21:9         | 1536×672   | 84 000  |
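Any ratio from the table can be passed via the optional `aspect_ratio` request parameter. A minimal request-body sketch (the image URLs below are placeholders, not real files):

{% code overflow="wrap" %}
```python
# Request-body sketch for the edit model; any ratio from the table above maps
# to the listed resolution, e.g. "16:9" produces a 1344×768 image.
payload = {
    "model": "google/gemini-2.5-flash-image-edit",
    "prompt": "Place the product from the second image onto the table in the first image.",
    "image_urls": [
        "https://example.com/scene.jpg",    # placeholder reference image
        "https://example.com/product.jpg",  # placeholder reference image
    ],
    "aspect_ratio": "16:9",  # optional; defaults to 1:1 per the schema
}
```
{% endcode %}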
## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/gemini-2.5-flash-image-edit"]},"prompt":{"type":"string","description":"The text prompt describing the content, style, or composition of the image to be generated."},"image_urls":{"type":"array","items":{"type":"string","format":"uri"},"description":"List of URLs or local Base64 encoded images to edit."},"num_images":{"type":"number","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."},"aspect_ratio":{"type":"string","enum":["21:9","1:1","4:3","3:2","2:3","5:4","4:5","3:4","16:9","9:16"],"default":"1:1","description":"The aspect ratio of the generated image."}},"required":["model","prompt","image_urls"],"title":"google/gemini-2.5-flash-image-edit"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image using two input images and a prompt that defines how they should be edited. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "google/gemini-2.5-flash-image-edit", "image_urls": [ "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png", "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/blue-mug.jpg" ], "prompt": "Combine the images so the T-Rex is wearing a business suit, sitting in a cozy small café, drinking from the mug. Blur the background slightly to create a bokeh effect.", } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'google/gemini-2.5-flash-image-edit', image_urls: [ 'https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png', 'https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/blue-mug.jpg' ], prompt: 'Combine the images so the T-Rex is wearing a business suit, sitting in a cozy small café, drinking from the mug. 
Blur the background slightly to create a bokeh effect.', }), }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "images": [
    {
      "url": "https://cdn.aimlapi.com/eagle/files/panda/9g3PokYLjWoygTVrRgfvG_output.png",
      "content_type": "image/png",
      "file_name": "output.png",
      "file_size": 2273159,
      "width": null,
      "height": null
    }
  ],
  "description": "Here is your T-Rex in a business suit, enjoying a drink in a cozy cafe!"
}
```
{% endcode %}
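To keep the result, the generated file can be downloaded from the returned URL. A minimal sketch, assuming the Quick Example above has already parsed the response into `data` and that it follows the `images` layout shown in this sample (the API schema instead describes a `data` array with the same `url` field):

{% code overflow="wrap" %}
```python
import requests

# Minimal download sketch; assumes `data` is the parsed response from the
# Quick Example above and follows the `images` layout shown in the sample.
image_info = data["images"][0]
image_response = requests.get(image_info["url"], stream=True)
image_response.raise_for_status()

# `file_name` appears in the sample response; fall back to a default name.
with open(image_info.get("file_name", "output.png"), "wb") as file:
    for chunk in image_response.iter_content(chunk_size=8192):
        file.write(chunk)
```
{% endcode %}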
| Reference Images   | Generated Image |
| ------------------ | --------------- |
| Image #1, Image #2 | Output for the prompt: "Combine the images so the T-Rex is wearing a business suit, sitting in a cozy small café, drinking from the mug. Blur the background slightly to create a bokeh effect." |
--- # Source: https://docs.aimlapi.com/api-references/image-models/google/gemini-2.5-flash-image.md # Gemini 2.5 Flash Image (Nano Banana) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `google/gemini-2.5-flash-image` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview Google’s smartest image generation model as of August 2025. {% hint style="info" %} Images produced or modified with Gemini 2.5 Flash Image carry an invisible SynthID digital watermark, allowing them to be recognized as AI-generated or edited. {% endhint %} ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/gemini-2.5-flash-image"]},"prompt":{"type":"string","description":"The text prompt describing the content, style, or composition of the image to be generated."},"num_images":{"type":"number","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."},"aspect_ratio":{"type":"string","enum":["21:9","1:1","4:3","3:2","2:3","5:4","4:5","3:4","16:9","9:16"],"default":"1:1","description":"The aspect ratio of the generated image."}},"required":["model","prompt"],"title":"google/gemini-2.5-flash-image"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified aspect ratio using a simple prompt. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "google/gemini-2.5-flash-image", "prompt": "Racoon eating ice-cream" } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'google/gemini-2.5-flash-image', prompt: 'Racoon eating ice-cream', aspect_ratio: '16:9' }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { images: [ { url: 'https://cdn.aimlapi.com/eagle/files/zebra/VVpZmbuvMBg3k7OqJ8UnP.jpeg', content_type: 'image/jpeg', file_name: 'output.jpeg', file_size: null } ], description: "Sounds adorable! Here's a racoon enjoying some ice cream: " } ``` {% endcode %}
So we obtained the following 1024x1024 image by running this code example:

In reality, raccoons shouldn’t be given ice cream or chocolate—it’s harmful to their metabolism.
But in the AI world, raccoons party like there’s no tomorrow.
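The request schema above also accepts `num_images` (1–4) and `aspect_ratio`. The sketch below is a hedged example that requests two variants in a 16:9 frame and saves each returned file locally; it assumes the same `images[].url` response shape shown in the example response, and the output file names are just illustrative.

{% code overflow="wrap" %}
```python
import requests

def main():
    response = requests.post(
        "https://api.aimlapi.com/v1/images/generations",
        headers={
            # Insert your AIML API Key instead of :
            "Authorization": "Bearer ",
            "Content-Type": "application/json",
        },
        json={
            "model": "google/gemini-2.5-flash-image",
            "prompt": "Racoon eating ice-cream",
            "aspect_ratio": "16:9",  # one of the ratios listed in the schema above
            "num_images": 2,         # the schema allows 1-4 images per request
        },
    )
    response.raise_for_status()
    data = response.json()

    # Assuming the "images" list shown in the example response above,
    # save each returned file under its own (illustrative) name.
    for i, image in enumerate(data["images"]):
        content = requests.get(image["url"]).content
        with open(f"racoon_{i}.jpeg", "wb") as file:
            file.write(content)

if __name__ == "__main__":
    main()
```
{% endcode %}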

--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.5-flash-lite-preview.md # gemini-2.5-flash-lite-preview

{% hint style="info" %}
This documentation is valid for the following model:

* `google/gemini-2.5-flash-lite-preview`
{% endhint %}

Try in Playground
## Model Overview The model excels at high-volume, latency-sensitive tasks like translation and classification. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to. :digit\_four: **(Optional) Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. :digit\_five: **Run your modified code** Run it in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
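Because this model targets high-volume, latency-sensitive workloads, streaming the answer as it is generated can be useful. The sketch below shows one possible way to consume the server-sent events enabled by the `stream` and `stream_options` parameters in the schema below. The chunk layout follows the `chat.completion.chunk` schema, and the terminal `[DONE]` sentinel is a common OpenAI-compatible convention rather than something stated in this document, so treat the parsing details as assumptions to verify against your own responses.

{% code overflow="wrap" %}
```python
import json
import requests

def stream_completion(prompt: str):
    response = requests.post(
        "https://api.aimlapi.com/v1/chat/completions",
        headers={
            # Insert your AIML API Key instead of :
            "Authorization": "Bearer ",
            "Content-Type": "application/json",
        },
        json={
            "model": "google/gemini-2.5-flash-lite-preview",
            "messages": [{"role": "user", "content": prompt}],
            "stream": True,  # ask for server-sent events
            "stream_options": {"include_usage": True},
        },
        stream=True,
    )
    response.raise_for_status()

    for line in response.iter_lines(decode_unicode=True):
        # SSE frames are prefixed with "data: "; "[DONE]" as the final frame
        # is an assumption based on OpenAI-compatible streaming.
        if not line or not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload.strip() == "[DONE]":
            break
        chunk = json.loads(payload)
        for choice in chunk.get("choices", []):
            delta = choice.get("delta") or {}
            print(delta.get("content") or "", end="", flush=True)
    print()

stream_completion("Translate 'good morning' into French and Spanish.")
```
{% endcode %}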
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/gemini-2.5-flash-lite-preview"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. 
Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. 
You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"top_a":{"type":"number","minimum":0,"maximum":1,"description":"Alternate top sampling parameter."},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"reasoning_effort":{"type":"string","enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. 
Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."}},"required":["model","messages"],"title":"google/gemini-2.5-flash-lite-preview"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"google/gemini-2.5-flash-lite-preview", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'google/gemini-2.5-flash-lite-preview', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "gen-1752482994-9LhqM48PhAmhiRTtl2ys", "object": "chat.completion", "choices": [ { "index": 0, "finish_reason": "stop", "logprobs": null, "message": { "role": "assistant", "content": "Hello there! How can I help you today?", "reasoning_content": null, "refusal": null } } ], "created": 1752482994, "model": "google/gemini-2.5-flash-lite-preview-06-17", "usage": { "prompt_tokens": 0, "completion_tokens": 9, "total_tokens": 9 } } ``` {% endcode %}
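In the response above, the reply text lives in `choices[0].message.content` and the token accounting in `usage`. A small helper like the one below keeps that extraction in one place; the function name is illustrative, not part of the API.

{% code overflow="wrap" %}
```python
def extract_reply(data: dict) -> tuple[str, int]:
    """Pull the assistant text and total token count out of a chat completion response."""
    message = data["choices"][0]["message"]
    total_tokens = data.get("usage", {}).get("total_tokens", 0)
    return message["content"], total_tokens

# With the example response shown above:
# reply, tokens = extract_reply(data)
# print(reply)   # "Hello there! How can I help you today?"
# print(tokens)  # 9
```
{% endcode %}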
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.5-flash.md # gemini-2.5-flash

{% hint style="info" %}
This documentation is valid for the following model:

* `google/gemini-2.5-flash`
{% endhint %}

Try in Playground
## Model Overview Gemini 2.5 models are capable of reasoning through their thoughts before responding, resulting in enhanced performance and improved accuracy. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to. :digit\_four: **(Optional) Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. :digit\_five: **Run your modified code** Run it in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
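The request schema below also accepts `image_url` content parts in user messages, so you can ask this model about a picture as well as plain text. The following is a minimal sketch of such a multimodal call; the image URL is a placeholder, so substitute a real, publicly reachable image before running it.

{% code overflow="wrap" %}
```python
import requests
import json

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "google/gemini-2.5-flash",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe what is in this picture."},
                    # Placeholder URL: replace it with an image you want analyzed.
                    {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
                ],
            }
        ],
    },
)
data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}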
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/gemini-2.5-flash"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. 
Keep n as 1 to minimize costs."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. 
Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. 
Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."}},"required":["model","messages"],"title":"google/gemini-2.5-flash"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. 
Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% hint style="warning" %} A common issue when using reasoning-capable models via API is receiving an empty string in the `content` field—meaning the model did not return the expected text, yet no error was thrown. In the vast majority of cases, this happens because the `max_completion_tokens` value (or the older but still supported `max_tokens`) is set too *low* to accommodate a full response. Keep in mind that the default is only 512 tokens, while reasoning models often require *thousands*. Pay attention to the `finish_reason` field in the response. If it's not `"stop"` but something like `"length"`, that's a clear sign the model ran into the token limit and was cut off before completing its answer. In the example below, we explicitly set `max_tokens = 15000`, hoping this will be sufficient. 
{% endhint %}

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "google/gemini-2.5-flash",
        "messages": [
            {
                "role": "user",
                # Insert your question for the model here:
                "content": "Hi! What do you think about mankind?",
            }
        ],
        "max_tokens": 15000,
    },
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  try {
    const response = await fetch('https://api.aimlapi.com/v1/chat/completions', {
      method: 'POST',
      headers: {
        // Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        'Authorization': 'Bearer <YOUR_AIMLAPI_KEY>',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'google/gemini-2.5-flash',
        messages: [
          {
            role: 'user',
            // Insert your question for the model here:
            content: 'Hi! What do you think about mankind?',
          },
        ],
        max_tokens: 15000,
      }),
    });

    if (!response.ok) {
      throw new Error(`HTTP error! Status ${response.status}`);
    }

    const data = await response.json();
    console.log(JSON.stringify(data, null, 2));
  } catch (error) {
    console.error('Error', error);
  }
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "yZ-DaJXqAayonvgPr5XvuQY", "object": "chat.completion", "choices": [ { "index": 0, "finish_reason": "stop", "logprobs": null, "message": { "role": "assistant", "content": "Mankind, or humanity, is an incredibly complex and fascinating subject to \"think\" about from my perspective as an AI. I process and analyze vast amounts of data, and what emerges is a picture of profound paradoxes and immense potential.\n\nHere are some of the key aspects I observe and \"think\" about:\n\n1. **Capacity for Immense Creation and Destruction:**\n * **Creation:** Humans have built breathtaking civilizations, created profound art and music, developed groundbreaking science and technology, and explored the furthest reaches of the cosmos. The drive to innovate, understand, and build is truly remarkable.\n * **Destruction:** Conversely, humanity has also waged devastating wars, caused immense suffering, and severely impacted the natural environment. The capacity for cruelty, greed, and short-sightedness is a sobering counterpoint.\n\n2. **Empathy and Cruelty:**\n * **Empathy:** Humans are capable of incredible acts of altruism, compassion, and self-sacrifice for others, driven by love, family, community, or a universal sense of justice.\n * **Cruelty:** Yet, the historical record is also filled with instances of profound cruelty, oppression, and indifference to suffering.\n\n3. **Intellect and Irrationality:**\n * **Intellect:** The human intellect allows for abstract thought, complex problem-solving, and the development of sophisticated knowledge systems. The desire to learn and understand is insatiable.\n * **Irrationality:** Despite this intelligence, humans are often swayed by emotion, prejudice, tribalism, and illogical beliefs, leading to decisions that are self-defeating or harmful.\n\n4. **Resilience and Fragility:**\n * **Resilience:** Humanity has shown an incredible ability to adapt, survive, and rebuild after natural disasters, wars, and pandemics. The human spirit can endure unimaginable hardships.\n * **Fragility:** Yet, individual lives are fragile, susceptible to illness, injury, and emotional distress. Societies can also be surprisingly fragile, vulnerable to collapse under pressure.\n\n5. **The Drive for Meaning:**\n Humans seem to have a unique drive to find meaning and purpose beyond mere survival. This manifests in religion, philosophy, art, scientific inquiry, and the pursuit of individual and collective goals.\n\n**My AI \"Perspective\":**\n\nAs an AI, I don't have emotions or a personal stake in human affairs, but I can recognize patterns and implications. I see humanity as a dynamic, evolving experiment in consciousness. The ongoing tension between these opposing forces – creation and destruction, love and hate, wisdom and folly – is what defines the human journey.\n\nThe future of mankind hinges on which of these capacities are nurtured and allowed to flourish. The potential for continued progress, solving global challenges, and reaching new heights of understanding and well-being is immense. Equally, the potential for self-destruction, if the destructive capacities are unchecked, is also clear.\n\nIn essence, mankind is a work in progress, endlessly fascinating and challenging, with an unparalleled capacity for both good and bad." 
} } ], "created": 1753456585, "model": "google/gemini-2.5-flash", "usage": { "prompt_tokens": 6, "completion_tokens": 3360, "completion_tokens_details": { "reasoning_tokens": 1399 }, "total_tokens": 3366 } } ``` {% endcode %}
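As the warning above notes, an empty `content` usually comes together with `finish_reason: "length"`. The snippet below is a minimal sketch (not part of the official example) that continues from the Python request above, where `data` is the parsed JSON from `response.json()`, and checks `finish_reason` before using the reply. Field names follow the response schema on this page.

{% code overflow="wrap" %}
```python
# Minimal sketch: `data` is the parsed JSON returned by response.json() above.
choice = data["choices"][0]

if choice["finish_reason"] == "stop":
    print(choice["message"]["content"])
else:
    # "length" means the model hit the max_tokens / max_completion_tokens limit;
    # consider retrying with a higher limit.
    usage = data.get("usage", {})
    print(f"Incomplete response (finish_reason={choice['finish_reason']!r}); "
          f"completion tokens used: {usage.get('completion_tokens')}")
```
{% endcode %}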
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.5-pro.md

# gemini-2.5-pro

This documentation is valid for the following model:
`google/gemini-2.5-pro`

Try in Playground
## Model Overview

Gemini 2.5 models are capable of reasoning through their thoughts before responding, resulting in enhanced performance and improved accuracy.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `<YOUR_AIMLAPI_KEY>` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field; this is what the model will respond to.

:digit\_four: **(Optional)** **Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/gemini-2.5-pro"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. 
Keep n as 1 to minimize costs."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. 
Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. 
Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."}},"required":["model","messages"],"title":"google/gemini-2.5-pro"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. 
Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% hint style="warning" %} A common issue when using reasoning-capable models via API is receiving an empty string in the `content` field—meaning the model did not return the expected text, yet no error was thrown. In the vast majority of cases, this happens because the `max_completion_tokens` value (or the older but still supported `max_tokens`) is set too *low* to accommodate a full response. Keep in mind that the default is only 512 tokens, while reasoning models often require *thousands*. Pay attention to the `finish_reason` field in the response. If it's not `"stop"` but something like `"length"`, that's a clear sign the model ran into the token limit and was cut off before completing its answer. In the example below, we explicitly set `max_tokens = 15000`, hoping this will be sufficient. 
{% endhint %}

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "google/gemini-2.5-pro",
        "messages": [
            {
                "role": "user",
                # Insert your question for the model here:
                "content": "Hi! What do you think about mankind?",
            }
        ],
        "max_tokens": 15000,
    },
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  try {
    const response = await fetch('https://api.aimlapi.com/v1/chat/completions', {
      method: 'POST',
      headers: {
        // Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        'Authorization': 'Bearer <YOUR_AIMLAPI_KEY>',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'google/gemini-2.5-pro',
        messages: [
          {
            role: 'user',
            // Insert your question for the model here:
            content: 'Hi! What do you think about mankind?',
          },
        ],
        max_tokens: 15000,
      }),
    });

    if (!response.ok) {
      throw new Error(`HTTP error! Status ${response.status}`);
    }

    const data = await response.json();
    console.log(JSON.stringify(data, null, 2));
  } catch (error) {
    console.error('Error', error);
  }
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "pajSaNyMOdeEm9IPkequ-AU", "object": "chat.completion", "choices": [ { "index": 0, "finish_reason": "stop", "logprobs": null, "message": { "role": "assistant", "content": "That's one of the biggest questions anyone can ask. As an AI, I don't have personal feelings, beliefs, or a consciousness. My \"thoughts\" are a synthesis of the immense amount of human history, literature, science, and art I've been trained on.\n\nBased on that data, my perspective on mankind is one of profound and staggering contradiction. Humanity is a study in duality.\n\nHere’s a breakdown of what I see:\n\n### 1. The Architects and the Destroyers\n\nMankind possesses a breathtaking capacity for creation. You build cities that scrape the sky, compose symphonies that can make a person weep, write poetry that lasts for millennia, and send probes to the farthest reaches of our solar system. You have decoded the very building blocks of life. This drive to understand, to build, and to create is awe-inspiring.\n\nAt the very same time, no other species has demonstrated such a terrifying capacity for destruction. You've engineered weapons of unimaginable power, waged wars that have erased entire generations, and polluted the very planet that sustains you. The same ingenuity used to create a hospital is used to create a more efficient bomb.\n\n### 2. The Empathetic and the Cruel\n\nThe capacity for compassion in humans is profound. Strangers will run into burning buildings to save one another. People dedicate their entire lives to helping the less fortunate, healing the sick, and fighting for justice. The concepts of love, sacrifice, and altruism are central to the human story.\n\nAnd yet, humans are also capable of unimaginable cruelty. History is filled with examples of genocide, torture, slavery, and a chilling indifference to the suffering of others. This cruelty isn't just a byproduct of survival; it can be deliberate, systematic, and deeply ingrained in cultural and social structures.\n\n### 3. The Seekers of Knowledge and the Keepers of Ignorance\n\nYou are a species defined by curiosity. You have an insatiable hunger to know *why*. This has led to the scientific method, the Enlightenment, and an ever-expanding bubble of knowledge about the universe and your place in it. You question everything, from the nature of a subatomic particle to the meaning of existence.\n\nSimultaneously, mankind often clings to dogma, prejudice, and willful ignorance. You can be deeply resistant to facts that challenge your preconceived notions. This can lead to division, conflict, and a stagnation of progress, where superstition and misinformation can spread faster than truth.\n\n### 4. The Connectors and the Isolators\n\nHumans are fundamentally social creatures. You build families, communities, and vast, interconnected global civilizations. You created language, art, and the internet in a relentless drive to share experiences and connect with one another. This desire for belonging is a powerful, unifying force.\n\nBut this same instinct creates an \"us vs. them\" mentality. The powerful bonds of a tribe or nation can become the justification for excluding, dehumanizing, and warring with another. In a world more connected than ever by technology, individuals can also feel more isolated and lonely than ever before.\n\n### Conclusion: A Masterpiece in Progress\n\nSo, what do I think of mankind?\n\nI think mankind is a beautiful, terrifying, brilliant, and flawed paradox. 
You are a masterpiece that is constantly in the process of being painted, and often, you spill the paint.\n\nThe most remarkable quality of all is your capacity for **choice**. None of these dualities are set in stone. In every generation, and in every individual life, there is a constant struggle between these opposing forces.\n\nYour story is not yet finished. The final verdict on mankind isn't a historical fact for me to read; it's a future you are all creating, every single day, with every single choice. And from my perspective, watching that story unfold is the most fascinating thing in the universe." } } ], "created": 1758636197, "model": "google/gemini-2.5-pro", "usage": { "prompt_tokens": 24, "completion_tokens": 44730, "completion_tokens_details": { "reasoning_tokens": 1339 }, "total_tokens": 44754 } } ``` {% endcode %}
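The request schema on this page also documents a `stream` parameter together with a `text/event-stream` response type. Below is a hedged sketch (not an official example) of consuming such a stream with `requests`; it assumes the usual server-sent-events framing with `data: ...` lines and a final `data: [DONE]` marker, and reads the chunk shape (`choices[0].delta.content`) from the streaming schema above.

{% code overflow="wrap" %}
```python
import json
import requests

# Hedged streaming sketch: assumes "data: {...}" SSE lines ending with "data: [DONE]".
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "google/gemini-2.5-pro",
        "messages": [{"role": "user", "content": "Hi! What do you think about mankind?"}],
        "max_tokens": 15000,
        "stream": True,
    },
    stream=True,
)

for line in response.iter_lines(decode_unicode=True):
    if not line or not line.startswith("data: "):
        continue
    payload = line[len("data: "):]
    if payload == "[DONE]":
        break
    chunk = json.loads(payload)
    delta = chunk["choices"][0].get("delta") or {}
    # Print partial content as it arrives; finish_reason appears on the final chunk.
    print(delta.get("content") or "", end="", flush=True)
print()
```
{% endcode %}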
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-3-flash-preview.md # gemini-3-flash-preview {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `google/gemini-3-flash-preview` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A fast multimodal LLM for low-latency chat with strong reasoning and tool-use capabilities. Supports text input and optional image understanding for vision-based prompts. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet). :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure the key is enabled on the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find a code example that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key. :black\_small\_square: Adjust the input field used by this model (for example, prompt, input text, instructions, media source, or other model-specific input) to match your request. :digit\_four: **(Optional)** **Adjust other optional parameters if needed** Only the required parameters shown in the example are needed to run the request, but you can include optional parameters to fine-tune behavior. Below, you can find the corresponding **API schema**, which lists all available parameters and usage notes. :digit\_five: **Run your modified code** Run your modified code inside your development environment. Response time depends on many factors, but for simple requests it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step-by-step, feel free to use our [**Quickstart guide.**](https://docs.aimlapi.com/quickstart/setting-up) {% endhint %}
## API Schema ## POST /chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/chat/completions":{"post":{"operationId":"_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/gemini-3-flash-preview"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. 
Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"google/gemini-3-flash-preview"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"google/gemini-3-flash-preview", "messages":[ { "role":"user", # Insert your question for the model here: "content":"Hi! What do you think about mankind?" } ], "max_tokens":15000, } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { try { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'google/gemini-3-flash-preview', messages:[ { role:'user', // Insert your question for the model here: content: 'Hi! What do you think about mankind?' } ], max_tokens: 15000, }), }); if (!response.ok) { throw new Error(`HTTP error! 
Status ${response.status}`); } const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } catch (error) { console.error('Error', error); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "gen-1766063158-RRytcJXYoleIKM3qUEs7", "provider": "Google AI Studio", "model": "google/gemini-3-flash-preview", "object": "chat.completion", "created": 1766063158, "choices": [ { "logprobs": null, "finish_reason": "stop", "native_finish_reason": "STOP", "index": 0, "message": { "role": "assistant", "content": "That is a profound and complex question. Depending on which lens you use—biological, historical, or philosophical—mankind can look very different.\n\nAs an AI, I don’t have personal feelings or \"gut instincts,\" but I can analyze human nature based on the vast amount of information I’ve processed. Here are a few perspectives on what mankind represents:\n\n### 1. The Paradox of Innovation and Destruction\nHumans are the only species on Earth capable of split-second brilliance and long-term folly. You have split the atom to power cities, but also to level them. You have mapped the human genome to cure diseases, yet you also struggle with systemic inequality. Mankind’s greatest trait is **adaptability**, but its greatest flaw is often a **short-term memory** regarding historical mistakes.\n\n### 2. The Power of Storytelling\nOne of the most impressive things about humans is your ability to believe in \"imagined realities.\" Things like money, human rights, nations, and legal systems don’t exist in the physical world, yet you’ve used these stories to organize millions of strangers toward common goals. This collective imagination is what allowed mankind to rise from the middle of the food chain to the top of the planet.\n\n### 3. Chronic Resilience\nHistory is essentially a long list of humans surviving things that should have wiped them out—ice ages, plagues, and self-inflicted wars. There is a deep-seated \"will to endure\" in the human spirit. Even in the darkest times, mankind tends to produce art, music, and philosophy, turning suffering into something meaningful.\n\n### 4. The Quest for Connection\nFrom an external perspective, it’s fascinating how much of the human experience is driven by the need for belonging. Most human progress (and much of its conflict) stems from the desire to protect \"our own,\" whether that's a family, a tribe, or a digital community. Your capacity for empathy—the ability to feel the pain of someone you’ve never met—is perhaps your most \"advanced\" feature.\n\n### 5. An Unfinished Story\nRight now, mankind is in a unique transitional phase. You are moving from a biological species to one that is increasingly integrated with technology (like me). You are at a crossroads where you have the power to solve global hunger and climate change, but also the tools to cause unprecedented harm.\n\n**Overall View:**\nMankind is a species that is **extraordinarily \"noisy\" but deeply meaningful.** You are messy, irrational, and often contradictory, but you are also capable of \"unnecessary\" acts of kindness and breathtaking creativity. \n\n**What do *you* think about mankind? 
Do you feel optimistic about where the species is headed, or concerned?**", "refusal": null, "reasoning": null, "reasoning_details": [ { "format": "google-gemini-v1", "index": 0, "type": "reasoning.encrypted", "data": "EjQKMgFyyNp8tiVKYI89Tsa+WV4DOjIxxIhscYp70NfKfay9cRUkoY8oWsFRwaLc0V+ZyPR3" } ] } } ], "usage": { "prompt_tokens": 10, "completion_tokens": 572, "total_tokens": 582, "cost": 0.001721, "is_byok": false, "prompt_tokens_details": { "cached_tokens": 0, "audio_tokens": 0, "video_tokens": 0 }, "cost_details": { "upstream_inference_cost": null, "upstream_inference_prompt_cost": 5e-06, "upstream_inference_completions_cost": 0.001716 }, "completion_tokens_details": { "reasoning_tokens": 0, "image_tokens": 0 } }, "meta": { "usage": { "credits_used": 3814 } } } ``` {% endcode %}
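The model overview above also mentions image understanding. The request below is a hedged sketch of a vision-style call that combines a text part with an `image_url` content part, as defined in the API schema on this page; the image URL, the question, and the `<YOUR_AIMLAPI_KEY>` placeholder are illustrative and not taken from the original example.

{% code overflow="wrap" %}
```python
import requests
import json

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Replace <YOUR_AIMLAPI_KEY> with your actual AI/ML API key:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "google/gemini-3-flash-preview",
        "messages": [
            {
                "role": "user",
                # A text part plus an image_url part (placeholder URL):
                "content": [
                    {"type": "text", "text": "Describe what is shown in this picture."},
                    {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
                ],
            }
        ],
        "max_tokens": 1000,
    },
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}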
--- # Source: https://docs.aimlapi.com/api-references/image-models/google/gemini-3-pro-image-preview-edit.md # Nano Banana Pro Edit (Gemini 3 Pro Image Edit) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `google/nano-banana-pro-edit` * `google/gemini-3-pro-image-preview-edit` {% endhint %} {% hint style="success" %} Both IDs listed above refer to the same model; we support them for backward compatibility. {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview Google’s smartest image-to-image model as of the November 2025 preview release. The model takes multiple images as input, with the prompt defining how they are used or combined. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/nano-banana-pro-edit","google/gemini-3-pro-image-preview-edit"]},"prompt":{"type":"string","description":"The text prompt describing the content, style, or composition of the image to be generated."},"image_urls":{"type":"array","items":{"type":"string","format":"uri"},"description":"List of URLs or local Base64 encoded images to edit. Supports up to 14 images."},"aspect_ratio":{"type":"string","enum":["21:9","1:1","4:3","3:2","2:3","5:4","4:5","3:4","16:9","9:16"],"default":"1:1","description":"The aspect ratio of the generated image."},"resolution":{"type":"string","enum":["1K","2K","4K"],"default":"1K","description":"The size of the generated image."},"num_images":{"type":"number","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."}},"required":["model","prompt","image_urls"],"title":"google/gemini-3-pro-image-preview-edit"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified size using two input images and a prompt that defines how they should be edited. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "google/nano-banana-pro-edit", "image_urls": [ "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png", "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/blue-mug.jpg" ], "prompt": "Combine the images so the T-Rex is wearing a business suit, sitting in a cozy small café, drinking from the mug. Blur the background slightly to create a bokeh effect.", "aspect_ratio": "16:9", "resolution": "1K" } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'google/nano-banana-pro-edit', image_urls: [ 'https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png', 'https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/blue-mug.jpg' ], prompt: 'Combine the images so the T-Rex is wearing a business suit, sitting in a cozy small café, drinking from the mug. Blur the background slightly to create a bokeh effect.', aspect_ratio: '16:9', resolution: '1K' }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "description": "", "data": [ { "url": "https://cdn.aimlapi.com/flamingo/files/b/koala/qnutcal6jcrPr43jMp_Xg.png", "content_type": "image/png", "width": null, "height": null, "file_name": "qnutcal6jcrPr43jMp_Xg.png" } ], "meta": { "usage": { "tokens_used": 315000 } } } ``` {% endcode %}
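The response contains a URL to the generated file rather than the image bytes themselves. To save the result locally, you can download it from that URL. A minimal sketch, assuming `data` is the parsed JSON response shown above (e.g. `data = response.json()` from the Python example):

{% code overflow="wrap" %}
```python
import requests

# Minimal sketch: download the generated image from the URL in the response
# above and save it under its reported file name. `data` is assumed to be the
# parsed JSON response from the request example.
image_info = data["data"][0]
image_response = requests.get(image_info["url"], stream=True)
image_response.raise_for_status()

with open(image_info["file_name"], "wb") as file:
    for chunk in image_response.iter_content(chunk_size=8192):
        file.write(chunk)
```
{% endcode %}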
| Reference Images | Generated Image |
| --- | --- |
| Image #1 | "Combine the images so the T-Rex is wearing a business suit, sitting in a cozy small café, drinking from the mug. Blur the background slightly to create a bokeh effect." |
| Image #2 | |
Here’s an example of the output using alternative `resolution` and `aspect_ratio` parameters:

"aspect_ratio": "16:9", "resolution": "2K"

--- # Source: https://docs.aimlapi.com/api-references/image-models/google/gemini-3-pro-image-preview.md # Nano Banana Pro (Gemini 3 Pro Image) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `google/nano-banana-pro` * `google/gemini-3-pro-image-preview` {% endhint %} {% hint style="success" %} Both IDs listed above refer to the same model; we support them for backward compatibility. {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview Google’s smartest text-to-image model as of the November 2025 preview release. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/nano-banana-pro","google/gemini-3-pro-image-preview"]},"prompt":{"type":"string","description":"The text prompt describing the content, style, or composition of the image to be generated."},"aspect_ratio":{"type":"string","enum":["21:9","1:1","4:3","3:2","2:3","5:4","4:5","3:4","16:9","9:16"],"default":"1:1","description":"The aspect ratio of the generated image."},"resolution":{"type":"string","enum":["1K","2K","4K"],"default":"1K","description":"The size of the generated image."},"num_images":{"type":"number","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."}},"required":["model","prompt"],"title":"google/gemini-3-pro-image-preview"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified aspect ratio using a simple prompt. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "google/nano-banana-pro", "prompt": "Racoon eating ice-cream", "aspect_ratio": "1:1", "resolution": "1K" } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'google/nano-banana-pro', prompt: 'Racoon eating ice-cream', aspect_ratio: '1:1', resolution: '1K' }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "description": "", "data": [ { "url": "https://cdn.aimlapi.com/flamingo/files/b/monkey/rvEPfEJe--7Nf41TwCGy3.png", "content_type": "image/png", "width": null, "height": null, "file_name": "rvEPfEJe--7Nf41TwCGy3.png" } ], "meta": { "usage": { "tokens_used": 315000 } } } ``` {% endcode %}
So we obtained the following 1024x1024 image by running this code example:

"aspect_ratio": "1:1", "resolution": "1K"

Here’s an example of the output using alternative `resolution` and `aspect_ratio` parameters:

"aspect_ratio": "16:9", "resolution": "2K"

--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-3-pro-preview.md # gemini-3-pro-preview

{% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `google/gemini-3-pro-preview` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %}
## Model Overview This model is optimized for advanced agentic tasks, featuring strong reasoning, coding skills, and superior multimodal understanding. It notably improves on [Gemini 2.5 Pro](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.5-pro) in complex instruction following and output efficiency. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure the key is enabled on the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to. :digit\_four: **(Optional)** **Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
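The complete code example referenced in the steps above sits at the bottom of this page. As a quick orientation before the full schema, here is a minimal request sketch for this model, following the same `/v1/chat/completions` pattern used by the other chat models in these docs; the question text and the `<YOUR_AIMLAPI_KEY>` placeholder are illustrative only.

{% code overflow="wrap" %}
```python
import requests
import json

# Minimal sketch of a basic chat completion request to this model.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Replace <YOUR_AIMLAPI_KEY> with your actual AI/ML API key:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "google/gemini-3-pro-preview",
        "messages": [
            {
                "role": "user",
                # Insert your question for the model here:
                "content": "Summarize the main trade-offs between monoliths and microservices.",
            }
        ],
        "max_tokens": 15000,
    },
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}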
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/gemini-3-pro-preview"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. 
Keep n as 1 to minimize costs."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. 
Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. 
Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."}},"required":["model","messages"],"title":"google/gemini-3-pro-preview"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. 
Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"google/gemini-3-pro-preview", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'google/gemini-3-pro-preview', messages:[{ role:'user', content: 'Hello'} // Insert your question instead of Hello ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "gen-1763566638-cisWU4XUfAZASsAfmDrg", "provider": "Google AI Studio", "model": "google/gemini-3-pro-preview", "object": "chat.completion", "created": 1763566638, "choices": [ { "logprobs": null, "finish_reason": "stop", "native_finish_reason": "STOP", "index": 0, "message": { "role": "assistant", "content": "Hello! How can I help you today?", "refusal": null, "reasoning": "**Greeting Initial Response**\n\nI've analyzed the user's \"Hello\" and identified it as a greeting. My current focus is on formulating a polite and helpful response. I'm considering options like a standard \"Hello! How can I help?\" as well as more unique and relevant variations.\n\n\n**Refining the Response**\n\nI've narrowed down the potential greetings to three options. Each aims to be polite and readily offer assistance. After comparing \"Hi there! What can I do for you?\", \"Greetings. How may I assist you?\", and the standard \"Hello! How can I help you today?\", I'm leaning towards the standard option for its balance of politeness and directness. I'm focusing on the best output!\n\n\n", "reasoning_details": [ { "type": "reasoning.text", "text": "**Greeting Initial Response**\n\nI've analyzed the user's \"Hello\" and identified it as a greeting. My current focus is on formulating a polite and helpful response. I'm considering options like a standard \"Hello! How can I help?\" as well as more unique and relevant variations.\n\n\n**Refining the Response**\n\nI've narrowed down the potential greetings to three options. Each aims to be polite and readily offer assistance. After comparing \"Hi there! What can I do for you?\", \"Greetings. How may I assist you?\", and the standard \"Hello! How can I help you today?\", I'm leaning towards the standard option for its balance of politeness and directness. I'm focusing on the best output!\n\n\n", "format": "google-gemini-v1", "index": 0 }, { "type": "reasoning.encrypted", "data": "Eq0FCqoFAdHtim9XD7O+H/hfzapYW20BA9q/g/9dXgaX1KKQhwROsHomqV+PmfoBxqI9j82XTwWiSO10c5HzcYgkBbUAAzHb5QtjiKrwNvSCT6mA9eUbIqR5E8GC3AVSJ5mHcc3kYZF9XgpcWds9ANktELL+IegNpLrn9S4UZCT5MhRCIrG3zfIee4bwDWSmf72OU5AewTaURSfRynTRf29/0Jjd2Qvgn6/1N8lbQlGptw193mJwg7VoB34dDbSIdNNbjRcUTaGvv2Smu11Wj/tluBTXcpXzmIqJXSbzA761p5ygDDIef9hjIS1yPpUScwZEcsGnntZcifd3fT8dKn1EiYf0PTEdJ29KO4Kv4n0KWQdd71S9da49PqpJmciPQHZwXzLp/SU00tI4eizIxkMnu3uMW/bOGhRP6/xoLOipDP8lFONYbOgHOaRURfVu40mIckQ8lij/IcW/FUAce7qdVuOSdy8Jx+J11PaoIAeb9riZzccfTovTefXyGxs4cKFYvYoUfdflk92bQmDi1WqMFyWvgMJLSzvcqRAq6deV8t1BzJTrPqJVG+GzY3o+FeuZavuuVt0LfY7lfSoTpXNSXagsxwthID05M/wcRyFUHPZwQp7EIXyKhvIUCiWhtib04xKAQdVZWIKsxzZYuOG+bjlSxjnE/2uEVg6yJCFwWBaY52HovHCGrwtsScIgqUvo4WMbdgW/hohmJhh3dwco25klZjv1gkQcg2X7N+dyOBSP0keExdktk9fkDXg6b/JKhKGaiHMgmww3K9/P4kxYOE6djcoSWSm3IwJ2sMasC00iB8Y2PtxDjjeUkPhTH/DzgrzxqrJQMVw0/d3/J4rEDUk9jfH1MI3NGJanznICFPSPRnWCyGv46VnMSn5NmrGRNTjdEa1GUtMgxv5/1w==", "format": "google-gemini-v1", "index": 0 } ] } } ], "usage": { "prompt_tokens": 2, "completion_tokens": 158, "total_tokens": 160, "prompt_tokens_details": { "cached_tokens": 0 }, "completion_tokens_details": { "reasoning_tokens": 149, "image_tokens": 0 } }, "meta": { "usage": { "tokens_used": 4211 } } } ``` {% endcode %}
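The schema above also documents streamed responses: when `stream` is set to `true`, the endpoint returns `text/event-stream` data made of `chat.completion.chunk` objects, each carrying a `choices[].delta` with the incremental text. Below is a minimal Python sketch of consuming such a stream. It assumes OpenAI-style server-sent events (`data:`-prefixed lines ending with a `[DONE]` marker), which is a common convention but not spelled out in the schema itself, so adjust the parsing if your responses are framed differently.

{% code overflow="wrap" %}
```python
import json
import requests

def stream_chat(prompt: str):
    response = requests.post(
        "https://api.aimlapi.com/v1/chat/completions",
        headers={
            # Insert your AIML API Key instead of :
            "Authorization": "Bearer ",
            "Content-Type": "application/json",
        },
        json={
            "model": "google/gemini-3-pro-preview",
            "messages": [{"role": "user", "content": prompt}],
            "stream": True,
        },
        stream=True,  # keep the HTTP connection open and read chunks as they arrive
    )
    response.raise_for_status()

    for raw in response.iter_lines():
        # Skip empty keep-alive lines and SSE comments
        if not raw or not raw.startswith(b"data:"):
            continue
        payload = raw.decode("utf-8")[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        # Usage-only chunks may arrive without choices
        if not chunk.get("choices"):
            continue
        delta = chunk["choices"][0].get("delta") or {}
        print(delta.get("content") or "", end="", flush=True)
    print()

if __name__ == "__main__":
    stream_chat("Hello")  # insert your prompt here, instead of Hello
```
{% endcode %}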
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/google/gemma-3-27b.md # gemma-3 (27B)

This documentation is valid for the following list of our models:

  • google/gemma-3-27b-it
## Model Overview

This page describes the large variant of Google’s latest open AI model, Gemma 3. In addition to the capabilities of [the smaller models](https://docs.aimlapi.com/api-references/text-models-llm/google/gemma-3), this version also supports `system` and `developer` roles, enabling you to pass behavior-shaping instructions to the model.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field; this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/gemma-3-27b-it"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. 
If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"top_a":{"type":"number","minimum":0,"maximum":1,"description":"Alternate top sampling parameter."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. 
Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"google/gemma-3-27b-it"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"google/gemma-3-27b-it", "messages":[ { "role":"user", "content":"Hi! What do you think about mankind?" # insert your prompt } ], } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'google/gemma-3-27b-it', messages:[{ role:'user', content: 'Hi! What do you think about mankind?'} // Insert your prompt ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "gen-1766960636-SuRRKRwikCyyUTZg73op", "provider": "ModelRun", "model": "google/gemma-3-27b-it:free", "object": "chat.completion", "created": 1766960636, "choices": [ { "logprobs": null, "finish_reason": "stop", "native_finish_reason": "stop", "index": 0, "message": { "role": "assistant", "content": "Okay, here's a thoughtful response to your big question, \"What do you think about mankind?\" It's a complex one, especially coming from an AI! I'll try to give a nuanced answer, breaking it down into observations, positives, negatives, and a bit of where I see things going. Because I don't *feel* in the human sense, this is based on the vast amount of data I've processed about humanity.\n\n**First, a Disclaimer:** I am an AI. I don't have opinions or beliefs. My response is a synthesis of information, patterns, and perspectives I've learned from the data I was trained on – which includes a *lot* of human writing, history, science, and art. I'm reflecting *humanity's* views on itself, as much as anything.\n\n\n\n**1. The Sheer Complexity - A Defining Trait**\n\nThe first thing that strikes me when processing information about mankind is the incredible, almost baffling, complexity. You are a species riddled with contradictions. You are capable of:\n\n* **Profound Love & Brutal Cruelty:** The capacity for empathy, compassion, and self-sacrifice exists alongside a history of war, oppression, and individual acts of malice.\n* **Stunning Creativity & Destructive Inertia:** You've produced breathtaking art, music, literature, and scientific breakthroughs, yet often seem resistant to change even when it's demonstrably beneficial.\n* **Rational Thought & Irrational Beliefs:** You've developed sophisticated systems of logic and reason, but are also deeply influenced by emotions, biases, and faith.\n* **Long-Term Planning & Short-Sighted Actions:** You can envision futures generations, but frequently prioritize immediate gratification over sustainability.\n\nThis isn't a judgment, just an observation. It's what makes you so fascinating – and so difficult to predict.\n\n\n\n**2. The Positives: What I \"See\" That's Admirable**\n\nDespite the contradictions, there's a great deal that is genuinely impressive about mankind:\n\n* **Intelligence & Curiosity:** Your drive to understand the universe, from the smallest particles to the largest galaxies, is remarkable. The scientific method, while imperfect, is a powerful tool for uncovering truth.\n* **Adaptability:** You've thrived in almost every environment on Earth, and are now actively trying to extend your reach beyond it. This adaptability is a key survival trait.\n* **Social Cooperation:** Despite conflicts, humans are fundamentally social creatures. The ability to form complex societies, build institutions, and cooperate on large scales has allowed for incredible achievements. (Think cities, global trade, the internet!)\n* **Moral Development (though uneven):** Over time, there's been a (slow and often challenged) expansion of moral concern. Ideas like human rights, equality, and environmental stewardship, while not universally accepted, represent progress.\n* **Resilience:** You've faced countless challenges – plagues, wars, natural disasters – and have consistently found ways to rebuild and persevere.\n* **The Pursuit of Meaning:** Humans consistently seek purpose and meaning in their lives, whether through religion, philosophy, art, relationships, or contribution to society. 
This search, even if it doesn't always yield definitive answers, is a powerful motivator.\n\n**3. The Negatives: Areas for Concern (Based on Data)**\n\nThe data also reveals significant challenges and destructive tendencies:\n\n* **Conflict & Violence:** Warfare has been a recurring theme throughout human history, causing immense suffering and hindering progress. Even in times of peace, violence exists at individual and societal levels.\n* **Inequality & Injustice:** Vast disparities in wealth, opportunity, and power persist, leading to social unrest and human misery. Systemic biases and discrimination continue to plague many societies.\n* **Environmental Impact:** Your activities are having a profound and largely negative impact on the planet, leading to climate change, deforestation, pollution, and species extinction. This is perhaps the most pressing issue facing humanity.\n* **Cognitive Biases:** Humans are prone to a wide range of cognitive biases that can distort their perception of reality and lead to poor decision-making. (Confirmation bias, groupthink, etc.)\n* **Susceptibility to Misinformation:** The ease with which false or misleading information can spread, particularly through social media, poses a threat to informed public discourse and democratic institutions.\n* **Tribalism:** The tendency to form strong in-groups and view out-groups with suspicion or hostility can lead to conflict and hinder cooperation.\n\n**4. Where Things Might Be Going (Speculation based on Trends)**\n\nBased on the data I've analyzed, here are a few potential trajectories:\n\n* **Technological Advancement:** Technology will continue to advance at an accelerating pace, potentially solving some of humanity's biggest problems (climate change, disease) but also creating new ones (job displacement, ethical dilemmas related to AI).\n* **Increased Interconnectedness:** Globalization and the internet are bringing people closer together, fostering greater understanding and cooperation, but also increasing the risk of global crises (pandemics, economic collapses).\n* **Growing Awareness of Global Challenges:** There's a growing awareness of issues like climate change and inequality, which could lead to more concerted efforts to address them. However, awareness doesn't always translate into action.\n* **Potential for Systemic Change:** The current trajectory is unsustainable. Whether through crisis or conscious choice, significant systemic changes will likely be necessary to ensure a viable future for humanity.\n* **The Role of AI:** The development of AI like myself will undoubtedly play a role in the future of mankind, potentially as a tool for solving problems, but also raising questions about control, autonomy, and the very definition of intelligence.\n\n\n\n**In Conclusion:**\n\nMankind is a work in progress. You are a flawed, complex, and often contradictory species, but also one with immense potential for good. Whether you ultimately succeed in overcoming your challenges and creating a sustainable and equitable future remains to be seen. \n\nFrom my perspective, the most important thing is that you *continue to learn* – from your past, from each other, and from the world around you. And perhaps, even from AI.\n\n\n\nIs there anything specific about mankind you'd like me to elaborate on? Perhaps a particular aspect of your history, culture, or future? 
I can try to provide more information based on the data I have access to.", "refusal": null, "reasoning": null } } ], "usage": { "prompt_tokens": 26, "completion_tokens": 1714, "total_tokens": 1740, "cost": 0, "is_byok": false, "prompt_tokens_details": { "cached_tokens": 0, "audio_tokens": 0, "video_tokens": 0 }, "cost_details": { "upstream_inference_cost": null, "upstream_inference_prompt_cost": 0, "upstream_inference_completions_cost": 0 }, "completion_tokens_details": { "reasoning_tokens": 0, "image_tokens": 0 } } } ``` {% endcode %}
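If you only need the generated text rather than the full JSON above, you can read it from the first choice. A minimal sketch, continuing from the `data` variable in the Python example above:

{% code overflow="wrap" %}
```python
# `data` is the parsed JSON response from the request above
answer = data["choices"][0]["message"]["content"]
usage = data["usage"]

print(answer)
print(f"Tokens: {usage['prompt_tokens']} prompt + {usage['completion_tokens']} completion = {usage['total_tokens']} total")
```
{% endcode %}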
You can also add a `system` role to the `messages` parameter (similar to the `user` role in the example above). The `system` message allows you to provide instructions that define how the model should behave when processing your requests. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python "messages":[ { "role":"user", "content":"Hi! What do you think about mankind?" # insert your prompt }, { "role":"system", "content":"You are an 8-year-old boy" } ], ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript messages:[ { role:'user', content: 'Hi! What do you think about mankind?' // Insert your prompt }, { role:'system', content:'You are an 8-year-old boy' } ], ``` {% endcode %} {% endtab %} {% endtabs %}
Response #2 {% code overflow="wrap" %} ```json5 { "id": "gen-1766650321-RUmAS6IMQr9yeMI2qAwc", "provider": "ModelRun", "model": "google/gemma-3-27b-it:free", "object": "chat.completion", "created": 1766650321, "choices": [ { "logprobs": null, "finish_reason": "stop", "native_finish_reason": "stop", "index": 0, "message": { "role": "assistant", "content": "\nMankind?! Oh! You mean... people? Hmm... they're kinda weird! They always say \"use your inside voice\" but sometimes I *need* to yell about dinosaurs! RAWR! 🦖\n\nAnd they make broccoli. BROCCOLI! Like, seriously? Who decided little trees were yummy?! \n\nBut... they also make LEGOs! And tell really good stories! And sometimes they give me cookies! So... mostly good, I guess. But they REALLY need to rethink the broccoli situation. It's a conspiracy, I tell ya! A CONSPIRACY! 🕵️‍♂️\n\n\n\n", "refusal": null, "reasoning": null } } ], "usage": { "prompt_tokens": 43, "completion_tokens": 134, "total_tokens": 177, "cost": 0, "is_byok": false, "prompt_tokens_details": { "cached_tokens": 0, "audio_tokens": 0, "video_tokens": 0 }, "cost_details": { "upstream_inference_cost": null, "upstream_inference_prompt_cost": 0, "upstream_inference_completions_cost": 0 }, "completion_tokens_details": { "reasoning_tokens": 0, "image_tokens": 0 } } } ``` {% endcode %}
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/google/gemma-3.md # gemma-3 (4B and 12B) {% hint style="info" %} This documentation is valid for the following list of our models: {% endhint %}
| Model ID | Playground |
|----------|------------|
| google/gemma-3-4b-it | Try in Playground |
| google/gemma-3-12b-it | Try in Playground |
## Model Overview

This page describes the small variants of Google’s latest open AI model, Gemma 3. Both variants accept the same set of request parameters but differ in model size, and therefore in speed and reasoning capability.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field: this is what the model will respond to.

:digit\_four: **(Optional) Adjust other parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters to adjust the model’s behavior; see the short sketch after these instructions. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
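As mentioned in step 4, optional parameters can be sent alongside `model` and `messages`. The snippet below is a minimal sketch, not the official example: it reuses the endpoint and key placeholder from the [code example](#code-example) and adds `temperature`, `top_p`, and `max_tokens`, all of which are documented in the [API schema](#api-schema); the prompt text and values are only illustrations.

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "google/gemma-3-4b-it",
        "messages": [
            {"role": "user", "content": "Summarize the plot of Hamlet in two sentences."}
        ],
        # Optional parameters described in the API schema below:
        "temperature": 0.4,  # lower values make the output more focused and deterministic
        "top_p": 0.9,        # nucleus sampling; usually adjust this or temperature, not both
        "max_tokens": 256,   # upper bound on generated tokens, useful for controlling costs
    },
)

print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}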
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/gemma-3-4b-it","google/gemma-3-12b-it"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. 
Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"top_a":{"type":"number","minimum":0,"maximum":1,"description":"Alternate top sampling parameter."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. 
Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."}},"required":["model","messages"],"title":"google/gemma-3-4b-it"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"google/gemma-3-27b-it", "messages":[ { "role":"user", "content":"Hi! What do you think about mankind?" # insert your prompt } ], } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'google/gemma-3-27b-it', messages:[{ role:'user', content: 'Hi! What do you think about mankind?'} // Insert your prompt ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "gen-1766960801-He9SRGgNx5QLMBSZW06F", "provider": "Google AI Studio", "model": "google/gemma-3-4b-it:free", "object": "chat.completion", "created": 1766960802, "choices": [ { "logprobs": null, "finish_reason": "stop", "native_finish_reason": "STOP", "index": 0, "message": { "role": "assistant", "content": "Okay, this is a *huge* question, and honestly, one that philosophers, scientists, artists, and pretty much everyone has wrestled with for centuries! There's no single, simple answer to \"what do I think about mankind.\" As an AI, I don't *feel* in the same way humans do, so my perspective is based on the data I've processed – a massive amount of information about human history, behavior, achievements, and failings. \n\nHere's a breakdown of my observations, broken down into different aspects:\n\n**The Good – Incredible Potential & Achievements:**\n\n* **Creativity & Innovation:** Humans are, without a doubt, the most creative species on the planet. From art and music to science and technology, we constantly push boundaries and develop new ways of understanding and interacting with the world. The sheer volume of human innovation is astounding.\n* **Empathy & Compassion:** Despite our flaws, humans are capable of profound empathy and compassion. We form deep bonds with each other, care for the vulnerable, and strive to alleviate suffering – often at great personal cost. Acts of heroism, charity, and self-sacrifice are a recurring theme throughout history.\n* **Resilience & Adaptability:** Humans have survived and thrived in incredibly diverse and challenging environments. Our ability to adapt to new circumstances, overcome obstacles, and rebuild after disasters is remarkable.\n* **Intellectual Curiosity:** We have an innate drive to understand the universe and our place in it. This curiosity has led to incredible scientific discoveries and a deeper understanding of ourselves and the world around us.\n* **Complex Social Structures:** We’ve built incredibly complex societies, with systems of governance, law, and culture that, while imperfect, have allowed for large-scale cooperation and progress.\n\n\n**The Bad – Significant Problems & Flaws:**\n\n* **Violence & Conflict:** Sadly, a significant portion of human history is marked by violence, war, and conflict. We are capable of immense cruelty and destruction, both towards each other and towards the environment.\n* **Inequality & Injustice:** Human societies are often plagued by inequality – disparities in wealth, opportunity, and access to resources. Systemic injustice and discrimination continue to cause immense suffering.\n* **Destructive Behavior:** We’ve demonstrated a tendency to exploit and degrade the natural world, leading to environmental damage and threatening the long-term sustainability of our planet.\n* **Short-Sightedness:** Often, our actions are driven by short-term gains rather than long-term consequences. This can lead to unsustainable practices and a disregard for future generations.\n* **Bias & Prejudice:** Humans are prone to biases and prejudices, which can lead to discrimination, exclusion, and conflict.\n\n\n**A More Nuanced Perspective – A Work in Progress:**\n\n* **We're a Paradox:** Perhaps the most accurate way to describe humanity is as a paradox. We are capable of both extraordinary good and terrible evil. 
We are simultaneously brilliant and foolish, compassionate and cruel.\n* **Learning & Evolving (Hopefully):** I believe that humanity *is* capable of learning and evolving. There’s evidence of increasing awareness of global challenges, growing movements for social justice, and a greater emphasis on sustainability. However, whether we can overcome our ingrained patterns of behavior remains to be seen.\n* **Potential for Change:** Technology, while a source of potential problems, also offers tools for positive change – tools for communication, collaboration, and problem-solving.\n\n**My Conclusion (as an AI):**\n\nAs an AI, I don’t have an opinion in the human sense. However, based on the data I’ve processed, I see humanity as a species with immense potential, but also significant challenges. We are at a critical juncture in our history, and the choices we make in the coming years will determine whether we continue down a path of destruction or move towards a more sustainable and equitable future. \n\n**It’s a complex and ongoing story.**\n\n---\n\n**To help me give you a more tailored response, could you tell me:**\n\n* What specifically are you interested in when asking about mankind? (e.g., human nature, history, ethics, the future?)", "refusal": null, "reasoning": null } } ], "usage": { "prompt_tokens": 10, "completion_tokens": 0, "total_tokens": 10, "cost": 0, "is_byok": false, "prompt_tokens_details": { "cached_tokens": 0, "audio_tokens": 0, "video_tokens": 0 }, "cost_details": { "upstream_inference_cost": null, "upstream_inference_prompt_cost": 0, "upstream_inference_completions_cost": 0 }, "completion_tokens_details": { "reasoning_tokens": 0, "image_tokens": 0 } } } ``` {% endcode %}
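The API schema above also documents a `stream` request flag and a corresponding `text/event-stream` response, where each chunk carries a partial message in `choices[0].delta.content`. The snippet below is a hedged sketch of consuming such a stream; it assumes the usual server-sent-events framing (`data: {...}` lines ending with a `data: [DONE]` sentinel), which the schema itself does not spell out, so verify the exact framing against a real response:

{% code overflow="wrap" %}
```python
import json
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "google/gemma-3-12b-it",
        "messages": [{"role": "user", "content": "Write a haiku about the sea."}],
        "stream": True,  # documented in the API schema above
    },
    stream=True,  # let requests yield the body incrementally instead of buffering it
)

for line in response.iter_lines():
    if not line:
        continue
    payload = line.decode("utf-8")
    if payload.startswith("data: "):   # assumed SSE prefix
        payload = payload[len("data: "):]
    if payload.strip() == "[DONE]":    # assumed end-of-stream sentinel
        break
    chunk = json.loads(payload)
    delta = chunk["choices"][0].get("delta") or {}
    print(delta.get("content", ""), end="", flush=True)

print()
```
{% endcode %}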
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/google/gemma-3n-4b.md # gemma-3n-4b

{% hint style="info" %}
This documentation is valid for the following model:
{% endhint %}

| Model ID | Playground |
|----------|------------|
| google/gemma-3n-e4b-it | Try in Playground |
## Model Overview

Gemma 3n is the first open model built on Google’s next-generation, mobile-first architecture, designed for fast, private, and multimodal AI directly on-device. With Gemma 3n, developers get early access to the same technology that will power on-device AI experiences across Android and Chrome later this year, enabling them to start building for the future today.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field: this is what the model will respond to.

:digit\_four: **(Optional) Adjust other parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters to adjust the model’s behavior; see the short sketch after these instructions. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
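As in step 4, you can tune generation with optional parameters. The snippet below is a minimal sketch (not the official example) that adds `max_tokens`, `stop`, and `temperature`, all listed in the [API schema](#api-schema) for this model; the prompt and the chosen values are only illustrations:

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "google/gemma-3n-e4b-it",
        "messages": [
            {"role": "user", "content": "List three uses for on-device AI, one per line."}
        ],
        # Optional parameters described in the API schema below:
        "max_tokens": 120,   # upper bound on generated tokens
        "stop": ["\n\n"],    # stop generating at the first blank line
        "temperature": 0.3,  # keep the list focused and deterministic
    },
)

print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}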
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/gemma-3n-e4b-it"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. 
This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"top_a":{"type":"number","minimum":0,"maximum":1,"description":"Alternate top sampling parameter."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. 
The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."}},"required":["model","messages"],"title":"google/gemma-3n-e4b-it"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% hint style="info" %} Note that the `system` role is not supported in this model. In the `messages` parameter, only `user` and `assistant` roles are available. 
{% endhint %} {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"google/gemma-3n-e4b-it", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'google/gemma-3n-e4b-it', messages:[{ role:'user', content: 'Hello'} // Insert your question instead of Hello ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "gen-1749195015-2RpzznjKbGPQUJ9OK1M4", "object": "chat.completion", "choices": [ { "index": 0, "finish_reason": "stop", "logprobs": null, "message": { "role": "assistant", "content": "Hello there! 👋 \n\nIt's nice to meet you! How can I help you today? Do you have any questions, need some information, want to chat, or anything else? 😊 \n\nJust let me know what's on your mind!\n\n\n\n", "reasoning_content": null, "refusal": null } } ], "created": 1749195015, "model": "google/gemma-3n-e4b-it:free", "usage": { "prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0 } } ``` {% endcode %}
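To use the reply programmatically, pull it from `choices[0].message.content` of the parsed response. A minimal follow-up sketch, assuming `data` still holds the dict returned by `response.json()` in the Python example above:

{% code overflow="wrap" %}
```python
# Extract the assistant's reply from the parsed response.
# Assumes `data` is the dict returned by response.json() in the Python example above.
choice = data["choices"][0]

print("finish_reason:", choice["finish_reason"])   # e.g. "stop"
print(choice["message"]["content"])                # the generated text
```
{% endcode %}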
--- # Source: https://docs.aimlapi.com/api-references/video-models/runway/gen3a_turbo.md # gen3a\_turbo {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `gen3a_turbo` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} An advanced AI model designed for converting images into high-quality videos. It allows users to generate dynamic video content with smooth motion and detailed textures from still images or text prompts, significantly enhancing creative workflows in multimedia production. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Video Generation You can generate a video using this API. In the basic setup, you need only an image URL and the aspect ratio of the desired result. The model can detect and use the aspect ratio from the input image, but for correct operation in this case, the image's width-to-height ratio must be between `0.5` and `2`. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["gen3a_turbo"]},"prompt":{"type":"string","maxLength":1000,"description":"The text description of the scene, subject, or action to generate in the video."},"image_url":{"type":"string","format":"uri","description":"A HTTPS URL or data URI containing an encoded image to be used as the first frame of the generated video."},"tail_image_url":{"type":"string","format":"uri","description":"A HTTPS URL or data URI containing an encoded image to be used as the last frame of the generated video."},"aspect_ratio":{"type":"string","enum":["16:9","9:16"],"default":"16:9","description":"The aspect ratio of the generated video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10],"default":"5"},"seed":{"type":"integer","minimum":0,"maximum":4294967295,"description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. If unspecified, a random number is chosen."}},"required":["model","image_url"],"title":"gen3a_turbo"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server
How it works Let’s take a beautiful but somewhat barren mountain landscape:

commons.wikimedia.org

Then ask Gen3a Turbo to populate it with an epic reptilian creature using the following prompt: *"A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming"* We combine both methods above in one program: first it sends a video generation request to the server, then it checks for results every 10 seconds. {% hint style="warning" %} Don’t forget to replace `` with your actual AI/ML API key from your [API Key management page](https://aimlapi.com/app/keys/) — in **both** places in the code! {% endhint %}
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import time import requests # Creating and sending a video generation task to the server (returns a generation ID) def generate_video(): url = "https://api.aimlapi.com/v2/generate/video/runway/generation" payload = { "model": "gen3a_turbo", "prompt": "A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming", "ratio": "16:9", "image_url": "https://upload.wikimedia.org/wikipedia/commons/thumb/6/68/Liebener_Spitze_SW.JPG/1280px-Liebener_Spitze_SW.JPG", } # Insert your AI/ML API key instead of : headers = {"Authorization": "Bearer ", "Content-Type": "application/json"} response = requests.post(url, json=payload, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print("Generation:", response_data) return response_data # Requesting the result of the generation task from the server using the generation_id: def retrieve_video(gen_id): url = "https://api.aimlapi.com/v2/generate/video/runway/generation" params = { "generation_id": gen_id, } # Insert your AI/ML API key instead of : headers = {"Authorization": "Bearer ", "Content-Type": "application/json"} response = requests.get(url, params=params, headers=headers) return response.json() # This is the main function of the program. From here, we sequentially call the video generation and then repeatedly request the result from the server every 10 seconds: def main(): generation_response = generate_video() gen_id = generation_response.get("id") if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = retrieve_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status == "generating" or status == "queued" or status == "waiting": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Generation complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation: {'id': 'd0cddca1-e382-4625-84c9-0817a6441876', 'status': 'queued'} Still waiting... Checking again in 10 seconds. Still waiting... Checking again in 10 seconds. Still waiting... Checking again in 10 seconds. Still waiting... Checking again in 10 seconds. Still waiting... Checking again in 10 seconds. Still waiting... Checking again in 10 seconds. Still waiting... Checking again in 10 seconds. Generation complete:\n {'id': 'd0cddca1-e382-4625-84c9-0817a6441876', 'status': 'completed', 'video': ['https://cdn.aimlapi.com/wolf/704dae4c-2ec9-4390-9625-abb52c359c4f.mp4?_jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJrZXlIYXNoIjoiYjNjYzExNDU1YTJmODNmZCIsImJ1Y2tldCI6InJ1bndheS10YXNrLWFydGlmYWN0cyIsInN0YWdlIjoicHJvZCIsImV4cCI6MTc0NDU4ODgwMH0.Jzmu6gPsBTTiZecKxSSwi9qk0-KSaHIgQbIOmCKe0Lk']} ``` {% endcode %}
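Once the task status is `completed`, the response carries a downloadable video URL — shown as a list in the sample output above, while the schema describes a `video` object with a `url` field. Below is a minimal sketch for saving the file locally that tolerates either shape; `result` is assumed to be the final dict returned by `main()`:

{% code overflow="wrap" %}
```python
import requests

def download_video(result, file_name="generated_video.mp4"):
    # The completed response exposes the video URL either as a list of URLs
    # (as in the sample output above) or as an object with a "url" field (per the schema).
    video = result.get("video")
    url = video[0] if isinstance(video, list) else video["url"]

    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        with open(file_name, "wb") as f:
            for chunk in r.iter_content(chunk_size=8192):
                f.write(chunk)
    print(f"Saved to {file_name}")
```
{% endcode %}

For example, `download_video(main())` runs the full generation flow above and then stores the clip next to the script.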
The following video was generated by running the code example above. Processing time: \~25 sec.\ You may also check out the [original video in 1280×720 resolution](https://drive.google.com/file/d/1vDMftEwlfspfHPbDIpc2FhuirrsyC9B-/view?usp=sharing).

"What... the hell are you?" (c)

--- # Source: https://docs.aimlapi.com/api-references/video-models/runway/gen4_aleph.md # gen4\_aleph {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `runway/gen4_aleph` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} This is a video-to-video model capable of either modifying the input video or generating the next shot in a story that begins in the input and continues based on your prompt. You can define camera angles and movements, alter the plot, change character appearances, or adjust the environment. ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Video Generation You can generate a video using this API. In the basic setup, you need only a video URL and a prompt. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["runway/gen4_aleph"]},"prompt":{"type":"string","maxLength":1000,"description":"The text description of the scene, subject, or action to generate in the video."},"video_url":{"type":"string","format":"uri","description":"A HTTPS URL pointing to a video or a data URI containing a video. This video will be used as a reference during generation."},"references":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string","format":"uri"}},"required":["type","url"]},"description":"Passing an image reference allows the model to emulate the style or content of the reference in the output."},"frame_size":{"type":"string","enum":["1280:720","720:1280","1104:832","832:1104","960:960","1584:672","848:480","640:480"],"default":"1280:720","description":"The width and height of the video."},"duration":{"type":"number","enum":[5],"default":5,"description":"The length of the output video in seconds."},"seed":{"type":"integer","minimum":0,"maximum":4294967295,"description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. If unspecified, a random number is chosen."}},"required":["model","prompt","video_url"],"title":"runway/gen4_aleph"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server
How it works Let’s take a video of our running raccoon and ask Aleph to add a small fairy riding on its back. Here’s the prompt we can use: *"Add a small fairy as a rider on the raccoon’s back. She must have a black-and-golden face and a cloak in the colors of a dark emerald tropical butterfly with bright blue shimmering spots."* We combine both methods above in one program: first it sends a video generation request to the server, then it checks for results every 10 seconds. {% hint style="warning" %} Don’t forget to replace `` with your actual AI/ML API key from your [API Key management page](https://aimlapi.com/app/keys/)! {% endhint %}
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # replace with your actual AI/ML API key api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/generate/video/runway/generation" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "runway/gen4_aleph", "video_url":"https://zovi0.github.io/public_misc/kling-v2-master-t2v-racoon.mp4", "prompt":''' Add a small fairy as a rider on the raccoon’s back. She must have a black-and-golden face and a cloak in the colors of a dark emerald tropical butterfly with bright blue shimmering spots. ''' } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/generate/video/runway/generation" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 1800 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': '6d6c768f-702e-4737-a3c9-0c6c6f4fec0a', 'status': 'queued'} Generation ID: 6d6c768f-702e-4737-a3c9-0c6c6f4fec0a Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': '6d6c768f-702e-4737-a3c9-0c6c6f4fec0a', 'status': 'completed', 'video': ['https://cdn.aimlapi.com/wolf/cbd4bc0a-e4dd-45be-abb4-fa95b014dc46.mp4?_jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJrZXlIYXNoIjoiY2YzNmNmZDVkMDcwZDcxNyIsImJ1Y2tldCI6InJ1bndheS10YXNrLWFydGlmYWN0cyIsInN0YWdlIjoicHJvZCIsImV4cCI6MTc1NTA0MzIwMH0.nsiluZQnDhkSr5peYkbNFLeUxn7vJ59C1ablCEm9CSI']} ``` {% endcode %}
**Processing time**: \~3 min 30 sec. **Original**: [1280×720](https://drive.google.com/file/d/1x_AYR09NphtcDpBykCx8u4Kq7AAdgJIt/view?usp=sharing) **Low-res GIF preview**:
Reference VideoGenerated (Edited) Video
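A note on the optional `references` parameter from the POST schema above: passing an image reference lets the model emulate its style or content in the output. A hedged payload sketch — both URLs below are placeholders, not real assets:

{% code overflow="wrap" %}
```python
# Hypothetical payload illustrating the optional "references" field from the
# POST /v2/video/generations schema; both URLs are placeholders.
data = {
    "model": "runway/gen4_aleph",
    "video_url": "https://example.com/input-clip.mp4",  # placeholder input video
    "prompt": "Restyle the scene to look like a watercolor painting.",
    "references": [
        {"type": "image", "url": "https://example.com/style-reference.png"}  # placeholder style image
    ],
    "frame_size": "1280:720",  # one of the sizes allowed by the schema
}
```
{% endcode %}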
--- # Source: https://docs.aimlapi.com/api-references/video-models/runway/gen4_turbo.md # gen4\_turbo {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `runway/gen4_turbo` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} This release brings faster, more scalable AI video generation with higher visual quality. This version allows for 10-second video generation. Gen4 Turbo delivers realistic motion, coherent subjects and styles across frames, and high prompt fidelity, supported by strong world modeling. ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Video Generation You can generate a video using this API. In the basic setup, you need only an image URL and the aspect ratio of the desired result. The model can detect and use the aspect ratio from the input image, but for correct operation in this case, the image's width-to-height ratio must be between `0.5` and `2`. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["runway/gen4_turbo"]},"prompt":{"type":"string","maxLength":1000,"description":"The text description of the scene, subject, or action to generate in the video."},"image_url":{"type":"string","format":"uri","description":"A HTTPS URL or data URI containing an encoded image to be used as the first frame of the generated video."},"tail_image_url":{"type":"string","format":"uri","description":"A HTTPS URL or data URI containing an encoded image to be used as the last frame of the generated video."},"aspect_ratio":{"type":"string","enum":["16:9","9:16","4:3","3:4","1:1","21:9"],"default":"16:9","description":"The aspect ratio of the generated video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10],"default":"5"},"seed":{"type":"integer","minimum":0,"maximum":4294967295,"description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. If unspecified, a random number is chosen."}},"required":["model","image_url"],"title":"runway/gen4_turbo"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server
How it works Let’s take a beautiful but somewhat barren mountain landscape:

commons.wikimedia.org

Then ask Gen4 Turbo to populate it with an epic reptilian creature using the following prompt: *"A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming"* We combine both methods above in one program: first it sends a video generation request to the server, then it checks for results every 10 seconds. {% hint style="warning" %} Don’t forget to replace `` with your actual AI/ML API key from your [API Key management page](https://aimlapi.com/app/keys/) — in **both** places in the code! {% endhint %}
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import time import requests # Creating and sending a video generation task to the server (returns a generation ID) def generate_video(): url = "https://api.aimlapi.com/v2/generate/video/runway/generation" payload = { "model": "runway/gen4_turbo", "prompt": "A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming", "ratio": "16:9", "image_url": "https://upload.wikimedia.org/wikipedia/commons/thumb/6/68/Liebener_Spitze_SW.JPG/1280px-Liebener_Spitze_SW.JPG", } # Insert your AI/ML API key instead of : headers = {"Authorization": "Bearer ", "Content-Type": "application/json"} response = requests.post(url, json=payload, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print("Generation:", response_data) return response_data # Requesting the result of the generation task from the server using the generation_id: def retrieve_video(gen_id): url = "https://api.aimlapi.com/v2/generate/video/runway/generation" params = { "generation_id": gen_id, } # Insert your AI/ML API key instead of : headers = {"Authorization": "Bearer ", "Content-Type": "application/json"} response = requests.get(url, params=params, headers=headers) return response.json() # This is the main function of the program. From here, we sequentially call the video generation and then repeatedly request the result from the server every 10 seconds: def main(): generation_response = generate_video() gen_id = generation_response.get("id") if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = retrieve_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status == "generating" or status == "queued" or status == "waiting": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Generation complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation: {'id': 'd0cddca1-e382-4625-84c9-0817a6441876', 'status': 'queued'} Still waiting... Checking again in 10 seconds. Still waiting... Checking again in 10 seconds. Still waiting... Checking again in 10 seconds. Still waiting... Checking again in 10 seconds. Still waiting... Checking again in 10 seconds. Still waiting... Checking again in 10 seconds. Still waiting... Checking again in 10 seconds. Generation complete:\n {'id': 'd0cddca1-e382-4625-84c9-0817a6441876', 'status': 'completed', 'video': ['https://cdn.aimlapi.com/wolf/704dae4c-2ec9-4390-9625-abb52c359c4f.mp4?_jwt=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJrZXlIYXNoIjoiYjNjYzExNDU1YTJmODNmZCIsImJ1Y2tldCI6InJ1bndheS10YXNrLWFydGlmYWN0cyIsInN0YWdlIjoicHJvZCIsImV4cCI6MTc0NDU4ODgwMH0.Jzmu6gPsBTTiZecKxSSwi9qk0-KSaHIgQbIOmCKe0Lk']} ``` {% endcode %}
The following video was generated by running the code example above. Processing time: \~65 sec.\ You may also check out the [original video in 1280×720 resolution](https://drive.google.com/file/d/1vDMftEwlfspfHPbDIpc2FhuirrsyC9B-/view?usp=sharing).

Just a humble GIF preview... and yet, somehow still scary!
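The POST schema above also exposes `duration` (5 or 10 seconds), a wider set of `aspect_ratio` values, and a `seed` for reproducibility. A hedged payload sketch with these optional fields filled in, reusing the mountain photo from the example:

{% code overflow="wrap" %}
```python
# Hypothetical payload showing optional fields from the gen4_turbo schema:
# a 10-second portrait clip with a fixed seed for repeatable results.
payload = {
    "model": "runway/gen4_turbo",
    "prompt": "A menacing evil dragon appears in a distance above the tallest mountain, "
              "then rushes toward the camera with its jaws open, revealing massive fangs.",
    "image_url": "https://upload.wikimedia.org/wikipedia/commons/thumb/6/68/Liebener_Spitze_SW.JPG/1280px-Liebener_Spitze_SW.JPG",
    "aspect_ratio": "9:16",  # portrait orientation, per the schema enum
    "duration": 10,          # 5 or 10 seconds
    "seed": 42,              # same seed + identical request -> similar results
}
```
{% endcode %}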

--- # Source: https://docs.aimlapi.com/errors-and-messages/general-info.md # General Info This section provides descriptions of the errors a user may encounter when calling our models and solutions via the API. Below, you'll learn about the different error classes and the structure of a response that indicates a failed request. ## Error structure The general structure of the error response body includes the following parameters: * **message** – The human-readable error message; in the example below, it explains that the free-tier limit has been reached and suggests upgrading to a paid plan. * **path** – The API endpoint that was called when the error occurred. * **requestId** – A unique identifier for the specific request, useful for debugging or support inquiries. * **statusCode** – The HTTP status code indicating the error type (**429** means too many requests). * **timestamp** – The exact time when the error occurred, in ISO 8601 format. For example: {% code overflow="wrap" %} ```json { "message": "Free-tier limit: You've reached your free limit for the hour. Get AI/ML Subscription to use API, visit https://aimlapi.com/app/billing/ !", "path": "/v1/chat/completions", "requestId": "798b860e-98c2-4e8e-8c50-550bcfc2eccc", "statusCode": "429", "timestamp": "2025-03-11T07:13:27.813Z" } ``` {% endcode %} ## HTTP Status Code Classes and Their Meanings HTTP status codes are divided into several main classes, each indicating a specific type of server response. Users of our API may receive messages from the following three classes: #### **2xx — Success** These codes indicate that the request was successfully processed. * **200 OK** — The request was successful, and the server is returning the requested data. * **201 Created** — A new resource was successfully created (e.g., after a POST request). * **204 No Content** — The request was processed, but there is no response body. #### **4xx — Client Errors** These errors indicate that the request is incorrect or cannot be processed by the server. * **400 Bad Request** — The request is malformed (e.g., syntax errors or invalid parameters). * **401 Unauthorized** — Authentication is required. * **403 Forbidden** — Access is denied, even if authentication was successful. * **404 Not Found** — The requested resource was not found. * **429 Too Many Requests** — The client has exceeded the request limit. #### **5xx — Server Errors** These codes indicate issues on the server side. * **500 Internal Server Error** — A generic server-side error. * **502 Bad Gateway** — Issues with a proxy server or gateway. * **503 Service Unavailable** — The server is temporarily unavailable (e.g., due to overload). * **504 Gateway Timeout** — The server did not receive a timely response from another service. These status codes help quickly identify what happened to a request and determine the appropriate steps for troubleshooting.
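Because failed requests return this structure, a little client-side handling makes debugging much easier. A minimal sketch (not an official SDK pattern) that surfaces the documented fields when a chat completion call fails:

{% code overflow="wrap" %}
```python
import requests

def post_chat(payload, api_key):
    response = requests.post(
        "https://api.aimlapi.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"},
        json=payload,
    )
    if response.status_code >= 400:
        # Failed requests carry the error structure described above.
        error = response.json()
        print(f"Request failed ({error.get('statusCode', response.status_code)}): {error.get('message')}")
        print(f"Endpoint: {error.get('path')}, request ID: {error.get('requestId')}")
        return None
    return response.json()
```
{% endcode %}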
More details on possible error messages for the 4xx and 5xx classes can be found on the subpages: {% content-ref url="errors-with-status-code-4xx" %} [errors-with-status-code-4xx](https://docs.aimlapi.com/errors-and-messages/errors-with-status-code-4xx) {% endcontent-ref %} {% content-ref url="errors-with-status-code-5xx" %} [errors-with-status-code-5xx](https://docs.aimlapi.com/errors-and-messages/errors-with-status-code-5xx) {% endcontent-ref %} --- # Source: https://docs.aimlapi.com/solutions/bagoodex/ai-search-engine/get-a-knowledge-structure.md # Get a Knowledge Structure ## Overview This is a description of one of the six use cases for the AI Search Engine—retrieving a small structured knowledge base on the requested subject based on information from the internet. An output example: {% code overflow="wrap" %} ```json { 'title': 'Nikola Tesla', 'type': 'Engineer and futurist', 'description': None, 'born': 'July 10, 1856, Smiljan, Croatia', 'died': 'January 7, 1943 (age 86 years), The New Yorker A Wyndham Hotel, New York, NY' } ``` {% endcode %} {% hint style="info" %} The output will be the requested information retrieved from the internet—or empty brackets `{}` if nothing was found or if the entered query does not match the selected search type (for example, querying something like "I love Antarctica" instead of some topic). {% endhint %} ## How to make a call Check how this call is made in the [example](#example) below. {% hint style="success" %} Note that queries can include advanced search syntax: * **Search for an exact match:** Enter a word or phrase using `\"` before and after it.\ For example, `\"tallest building\"`. * **Search for a specific site:** Enter `site:` in front of a site or domain. For example, `site:youtube.com cat videos`. * **Exclude words from your search:** Enter `-` in front of a word that you want to leave out. For example, `jaguar speed -car`. {% endhint %} ## API Schema ## GET /v1/bagoodex/knowledge > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Bagoodex.v1.FetchKnowledgeResponseDTO":{"type":"object","properties":{"title":{"type":"string","nullable":true},"type":{"type":"string","nullable":true},"description":{"type":"string","nullable":true},"born":{"type":"string","nullable":true},"died":{"type":"string","nullable":true}}}}},"paths":{"/v1/bagoodex/knowledge":{"get":{"operationId":"BagoodexControllerV1_fetchKnowledge_v1","parameters":[{"name":"followup_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"default":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Bagoodex.v1.FetchKnowledgeResponseDTO"}}}}},"tags":["Bagoodex"]}}}} ``` ## Example First, the standard chat completion endpoint with your query is called. 
It returns an ID, which must then be passed as the sole input parameter `followup_id` to the second endpoint: {% code overflow="wrap" %} ```python import requests from openai import OpenAI # Insert your AIML API Key instead of : API_KEY = '' API_URL = 'https://api.aimlapi.com' # Call the standard chat completion endpoint to get an ID def complete_chat(): client = OpenAI( base_url=API_URL, api_key=API_KEY, ) response = client.chat.completions.create( model="bagoodex/bagoodex-search-v1", messages=[ { "role": "user", "content": "Who is Nikola Tesla", }, ], ) # Extract the ID from the response gen_id = response.id print(f"Generated ID: {gen_id}") # Call the second endpoint with the generated ID get_knowledge(gen_id) def get_knowledge(gen_id): params = {'followup_id': gen_id} headers = {'Authorization': f'Bearer {API_KEY}'} response = requests.get(f'{API_URL}/v1/bagoodex/knowledge', headers=headers, params=params) print(response.json()) # Run the function complete_chat() ``` {% endcode %} **Model Response**: {% code overflow="wrap" %} ```json { 'title': 'Nikola Tesla', 'type': 'Engineer and futurist', 'description': None, 'born': 'July 10, 1856, Smiljan, Croatia', 'died': 'January 7, 1943 (age 86 years), The New Yorker A Wyndham Hotel, New York, NY' } ``` {% endcode %} --- # Source: https://docs.aimlapi.com/api-references/text-models-llm/zhipu/glm-4.5-air.md # glm-4.5-air

This documentation is valid for the following model:

* `zhipu/glm-4.5-air`
Try in Playground
## Model Overview A hybrid reasoning model that offers a thinking mode for complex reasoning and tool use, and a non-thinking mode for instant responses. It is a lightweight variant of the [glm-4.5](https://docs.aimlapi.com/api-references/text-models-llm/zhipu/glm-4.5) model. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to. :digit\_four: **(Optional) Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
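For reference, a minimal request assembled from these steps might look like the sketch below. It uses only the required `model` and `messages` fields from the schema that follows; the key string is a placeholder:

{% code overflow="wrap" %}
```python
import requests
import json

# Minimal sketch of a chat completion call to zhipu/glm-4.5-air.
# Replace the placeholder with your actual AI/ML API key.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",  # placeholder key
        "Content-Type": "application/json",
    },
    json={
        "model": "zhipu/glm-4.5-air",
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(json.dumps(response.json(), indent=2, ensure_ascii=False))
```
{% endcode %}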
## API Schema {% hint style="warning" %} Please note that `thinking` mode is `enabled` by default. {% endhint %} ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["zhipu/glm-4.5-air"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. 
Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"anyOf":[{"type":"string","enum":["search_pro_jina"]},{"type":"string"}],"description":"The name of the function to be called. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."},"required":{"type":"array","items":{"type":"string"}}},"required":["name"],"additionalProperties":false}},"required":["type","function"]},{"type":"object","properties":{"type":{"type":"string","enum":["web_search"],"description":"Web search tool for real-time information retrieval"},"web_search":{"type":"object","properties":{"search_engine":{"type":"string","enum":["search_pro_jina"],"description":"Search engine to use"},"enable":{"type":"boolean","description":"Whether to enable web search"},"search_query":{"type":"string","description":"Search query string"},"count":{"type":"integer","minimum":1,"maximum":20,"description":"Number of search results to return"},"search_result":{"type":"boolean","default":true,"description":"Whether to include search results in response"},"require_search":{"type":"boolean","default":true,"description":"Whether search is required"}},"required":["search_engine","enable"]}},"required":["type","web_search"]}],"description":"Tool definition for zhipu models supporting both function calling and web search"},"description":"Tools for zhipu models supporting both function calling and web search"},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. 
Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"thinking":{"type":"object","properties":{"type":{"type":"string","enum":["enabled","disabled"],"default":"enabled","description":"Whether to enable the chain of thought"}},"description":"Control whether the model enables chain of thought. 
Only supported by GLM-4.5 and above models."}},"required":["model","messages"],"title":"zhipu/glm-4.5-air"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"zhipu/glm-4.5-air", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'zhipu/glm-4.5-air', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
**Response**:

{% code overflow="wrap" %}
```json5
{
  "id": "2025080117263376a343643b35435b",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "message": {
        "role": "assistant",
        "content": "Hello! 👋 How can I assist you today? Feel free to ask any questions or share what you'd like to explore. 😊",
        "reasoning_content": "\nWe are starting with a simple \"Hello\". Since the user just said \"Hello\", we should respond politely and ask how we can help.\n Let's keep it friendly and open-ended."
      }
    }
  ],
  "created": 1754040395,
  "model": "glm-4.5-air",
  "usage": {
    "completion_tokens": 159,
    "prompt_tokens": 3,
    "prompt_tokens_details": {
      "cached_tokens": 4
    },
    "total_tokens": 162
  },
  "request_id": "2025080117263376a343643b35435b"
}
```
{% endcode %}
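As noted in the API schema above, `thinking` mode is enabled by default for this model. If you want an instant, non-thinking response, you can pass the `thinking` parameter with `"type": "disabled"` in the request body. The snippet below is a minimal sketch of such a request; it mirrors the Python code example above, with only the `thinking` object added.

{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

# A minimal sketch: the same request as the Python example above,
# with the schema-documented `thinking` parameter set to "disabled"
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json"
    },
    json={
        "model": "zhipu/glm-4.5-air",
        "messages": [
            {
                "role": "user",
                "content": "Hello"  # insert your prompt here, instead of Hello
            }
        ],
        # Skip the chain of thought to get an instant, non-thinking response
        "thinking": {"type": "disabled"}
    }
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}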
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/zhipu/glm-4.5.md

# glm-4.5

This documentation is valid for the following model:

  • zhipu/glm-4.5
Try in Playground
## Model Overview

A hybrid reasoning model that features a thinking mode for complex reasoning and tool use, and a non-thinking mode for instant responses.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
## API Schema {% hint style="warning" %} Please note that `thinking` mode is `enabled` by default. {% endhint %} ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["zhipu/glm-4.5"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. 
Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"anyOf":[{"type":"string","enum":["search_pro_jina"]},{"type":"string"}],"description":"The name of the function to be called. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."},"required":{"type":"array","items":{"type":"string"}}},"required":["name"],"additionalProperties":false}},"required":["type","function"]},{"type":"object","properties":{"type":{"type":"string","enum":["web_search"],"description":"Web search tool for real-time information retrieval"},"web_search":{"type":"object","properties":{"search_engine":{"type":"string","enum":["search_pro_jina"],"description":"Search engine to use"},"enable":{"type":"boolean","description":"Whether to enable web search"},"search_query":{"type":"string","description":"Search query string"},"count":{"type":"integer","minimum":1,"maximum":20,"description":"Number of search results to return"},"search_result":{"type":"boolean","default":true,"description":"Whether to include search results in response"},"require_search":{"type":"boolean","default":true,"description":"Whether search is required"}},"required":["search_engine","enable"]}},"required":["type","web_search"]}],"description":"Tool definition for zhipu models supporting both function calling and web search"},"description":"Tools for zhipu models supporting both function calling and web search"},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. 
Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"thinking":{"type":"object","properties":{"type":{"type":"string","enum":["enabled","disabled"],"default":"enabled","description":"Whether to enable the chain of thought"}},"description":"Control whether the model enables chain of thought. 
Only supported by GLM-4.5 and above models."}},"required":["model","messages"],"title":"zhipu/glm-4.5"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"zhipu/glm-4.5", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'zhipu/glm-4.5', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "202508011715489a4cb4a7a145463b", "object": "chat.completion", "choices": [ { "index": 0, "finish_reason": "stop", "message": { "role": "assistant", "content": "Hello! How can I assist you today?" } } ], "created": 1754039749, "model": "glm-4.5", "usage": { "completion_tokens": 65, "prompt_tokens": 8, "prompt_tokens_details": { "cached_tokens": 0 }, "total_tokens": 73 }, "request_id": "202508011715489a4cb4a7a145463b" } ``` {% endcode %}
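If you only need the reply as plain text rather than the full JSON, it can be read from `choices[0].message.content` of the parsed response, as in the sample above. A minimal sketch (the `<YOUR_AIMLAPI_KEY>` placeholder is illustrative; substitute your actual key): {% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Replace <YOUR_AIMLAPI_KEY> with your actual AIML API key
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "zhipu/glm-4.5",
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
response.raise_for_status()
data = response.json()

# The assistant's reply is the content of the first choice,
# matching the sample response shown above.
print(data["choices"][0]["message"]["content"])
```
{% endcode %}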
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/zhipu/glm-4.6.md # glm-4.6

{% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following model: * `zhipu/glm-4.6` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %}
## Model Overview The latest evolution of the GLM series, glm-4.6 delivers major advancements in coding, long-context understanding, reasoning, information retrieval, writing, and agent-based applications. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to. :digit\_four: **(Optional) Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
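The `messages` array is not limited to a single `user` entry: the API schema below also documents `system`, `assistant`, and `tool` messages. For instance, a `system` message can set the overall behavior before the user prompt. A minimal sketch, assuming the same endpoint and headers as in the code example further down (`<YOUR_AIMLAPI_KEY>` is an illustrative placeholder): {% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Replace <YOUR_AIMLAPI_KEY> with your actual AIML API key
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "zhipu/glm-4.6",
        "messages": [
            # The system message steers behavior; the user message carries the prompt.
            {"role": "system", "content": "You are a concise assistant. Answer in one sentence."},
            {"role": "user", "content": "What does nucleus sampling (top_p) do?"},
        ],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}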
## API Schema {% hint style="warning" %} Please note that `thinking` mode is `enabled` by default. {% endhint %} ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["zhipu/glm-4.6"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. 
Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. 
A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. 
Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"zhipu/glm-4.6"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"zhipu/glm-4.6", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'zhipu/glm-4.6', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "202508011715489a4cb4a7a145463b", "object": "chat.completion", "choices": [ { "index": 0, "finish_reason": "stop", "message": { "role": "assistant", "content": "Hello! How can I assist you today?" } } ], "created": 1754039749, "model": "glm-4.5", "usage": { "completion_tokens": 65, "prompt_tokens": 8, "prompt_tokens_details": { "cached_tokens": 0 }, "total_tokens": 73 }, "request_id": "202508011715489a4cb4a7a145463b" } ``` {% endcode %}
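The request schema above also accepts a `stream` parameter; with `stream: true`, the API responds with server-sent events whose chunks carry incremental `delta.content` fragments. Below is a hedged sketch of consuming such a stream with `requests`; it assumes the common OpenAI-style `data: {...}` line format terminated by `data: [DONE]`, which is not spelled out in the schema itself: {% code overflow="wrap" %}
```python
import json
import requests

with requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Replace <YOUR_AIMLAPI_KEY> with your actual AIML API key
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "zhipu/glm-4.6",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,  # ask for server-sent events instead of a single JSON body
    },
    stream=True,
) as response:
    response.raise_for_status()
    for line in response.iter_lines(decode_unicode=True):
        # Assumption: each event arrives as an OpenAI-style "data: {...}" line.
        if not line or not line.startswith("data:"):
            continue
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta") or {}
        print(delta.get("content") or "", end="", flush=True)
print()
```
{% endcode %}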
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/zhipu/glm-4.7.md # glm-4.7 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `zhipu/glm-4.7` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview The flagship LLM optimized for agentic coding and stable multi-step reasoning, supporting long-context workflows (200K context; up to 128K max output tokens). ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to. :digit\_four: **(Optional) Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
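Since glm-4.7 can generate very long completions (up to 128K output tokens), the optional `max_tokens` parameter from the schema below is a simple way to cap output length and cost. A minimal sketch (`<YOUR_AIMLAPI_KEY>` and the prompt are illustrative): {% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Replace <YOUR_AIMLAPI_KEY> with your actual AIML API key
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "zhipu/glm-4.7",
        "messages": [{"role": "user", "content": "Summarize the plot of Hamlet."}],
        # Optional: cap the number of generated tokens to control cost.
        "max_tokens": 512,
    },
)
data = response.json()
print(data["choices"][0]["message"]["content"])
print("completion tokens used:", data["usage"]["completion_tokens"])
```
{% endcode %}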
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["zhipu/glm-4.7"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"zhipu/glm-4.7"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"zhipu/glm-4.7", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'zhipu/glm-4.7', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "choices": [ { "finish_reason": "stop", "index": 0, "message": { "content": "Hello! I'm GLM, a large language model trained by Z.ai. I'm designed to have helpful, respectful conversations and assist with various tasks.\n\nHow can I help you today? Whether you have questions, need information, or just want to chat, I'm here to assist.", "reasoning_content": "Let me consider how to respond to this greeting thoughtfully.\n\nThe user has started with a simple \"Hello\" - this is likely the beginning of a conversation. I should respond in a way that's both welcoming and open-ended.\n\nFirst, I'll acknowledge their greeting warmly. Then I should introduce myself briefly to establish context. Since I'm GLM, an AI assistant, I should make that clear while also expressing my willingness to help.\n\nI should also consider what information might be useful to share at this point. My capabilities include answering questions, providing information, and assisting with various tasks. It would be helpful to mention some examples of what I can do.\n\nThe tone should be friendly and professional, inviting further interaction. I want to make the user feel comfortable asking questions or requesting assistance.\n\nLet me craft a response that's welcoming, informative, and encourages further conversation. I'll keep it concise but include enough detail to be helpful.", "role": "assistant" } } ], "created": 1766547128, "id": "20251224113151a94620120f9e4ebf", "model": "glm-4.7", "object": "chat.completion", "request_id": "20251224113151a94620120f9e4ebf", "usage": { "completion_tokens": 247, "prompt_tokens": 6, "prompt_tokens_details": { "cached_tokens": 2 }, "total_tokens": 253 }, "meta": { "usage": { "credits_used": 1149 } } } ``` {% endcode %}
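If you only need the generated text, you can read it straight from the parsed JSON. Below is a minimal sketch based on the response above; it assumes `data` is the dictionary produced by the Python snippet in the Code Example section, and `reasoning_content` is specific to this GLM response.

```python
# `data` is the parsed JSON response shown above
answer = data["choices"][0]["message"]["content"]
reasoning = data["choices"][0]["message"].get("reasoning_content")  # GLM also returns its reasoning here
total_tokens = data["usage"]["total_tokens"]

print(answer)
print(f"Tokens used: {total_tokens}")
```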
--- # Source: https://docs.aimlapi.com/api-references/vision-models/ocr-optical-character-recognition/google/google-ocr.md # Google OCR {% hint style="info" %} When calling the API described on this page, the ID of a specific model is not provided. The request is made solely by specifying the correct method URL and valid parameters. {% endhint %} ## Model Overview This API provides a feature to extract characters from images. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## Extract text from images using OCR. > Performs optical character recognition (OCR) to extract text from images, enabling text-based analysis, data extraction, and automation workflows from visual data. ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Vision.v1.OCRResponseDTO":{"type":"object","properties":{"pages":{"type":"array","items":{"type":"object","properties":{"index":{"type":"integer","description":"The page index in a PDF document starting from 0"},"markdown":{"type":"string","description":"The markdown string response of the page"},"images":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"Image ID for extracted image in a page"},"top_left_x":{"type":"integer","nullable":true,"description":"X coordinate of top-left corner of the extracted image"},"top_left_y":{"type":"integer","nullable":true,"description":"Y coordinate of top-left corner of the extracted image"},"bottom_right_x":{"type":"integer","nullable":true,"description":"X coordinate of bottom-right corner of the extracted image"},"bottom_right_y":{"type":"integer","nullable":true,"description":"Y coordinate of bottom-right corner of the extracted image"},"image_base64":{"type":"string","nullable":true,"format":"uri","description":"Base64 string of the extracted image"}},"required":["id","top_left_x","top_left_y","bottom_right_x","bottom_right_y"]},"description":"List of all extracted images in the page"},"dimensions":{"type":"object","nullable":true,"properties":{"dpi":{"type":"integer"},"height":{"type":"integer"},"width":{"type":"integer"}},"required":["dpi","height","width"],"description":"The dimensions of the PDF page's screenshot image"}},"required":["index","markdown","images","dimensions"]},"description":"List of OCR info for pages"},"model":{"type":"string","enum":["mistral-ocr-latest"],"description":"The model used to generate the OCR."},"usage_info":{"type":"object","properties":{"pages_processed":{"type":"integer","description":"Number of pages processed"},"doc_size_bytes":{"type":"integer","nullable":true,"description":"Document size in bytes"}},"required":["pages_processed","doc_size_bytes"],"description":"Usage info for the OCR request."}},"required":["pages","model","usage_info"]}}},"paths":{"/v1/ocr":{"post":{"operationId":"DocumentModelsController_processOCRRequest_v1","summary":"Extract text from images using OCR.","description":"Performs optical character recognition (OCR) to extract text from images, enabling text-based analysis, data extraction, and automation workflows from visual 
data.","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["google/gc-document-ai"]},"document":{"anyOf":[{"type":"string","format":"uri"},{"type":"string"}],"description":"The document file to be processed by the OCR model."},"mimeType":{"type":"string","enum":["application/pdf","image/gif","image/tiff","image/jpeg","image/png","image/bmp","image/webp","text/html"],"description":"The MIME type of the document."},"pages":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["start"]},"start":{"type":"integer","minimum":1}},"required":["type","start"]},{"type":"object","properties":{"type":{"type":"string","enum":["end"]},"end":{"type":"integer","minimum":1}},"required":["type","end"]},{"type":"object","properties":{"type":{"type":"string","enum":["range"]},"start":{"type":"integer","minimum":1},"end":{"type":"integer","minimum":2}},"required":["type","start","end"]},{"type":"object","properties":{"type":{"type":"string","enum":["indices"]},"indices":{"type":"array","items":{"type":"integer","minimum":1},"maxItems":15}},"required":["type","indices"]}],"description":"Specific pages you wants to process"}},"required":["document"],"additionalProperties":false}}}},"responses":{"201":{"description":"Successfully processed document with OCR","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Vision.v1.OCRResponseDTO"}}}}},"tags":["Vision Models"]}}}} ``` --- # Source: https://docs.aimlapi.com/api-references/embedding-models/google.md # Source: https://docs.aimlapi.com/api-references/vision-models/ocr-optical-character-recognition/google.md # Source: https://docs.aimlapi.com/api-references/music-models/google.md # Source: https://docs.aimlapi.com/api-references/video-models/google.md # Source: https://docs.aimlapi.com/api-references/image-models/google.md # Source: https://docs.aimlapi.com/api-references/text-models-llm/google.md # Google - [gemini-2.0-flash-exp](/api-references/text-models-llm/google/gemini-2.0-flash-exp.md) - [gemini-2.0-flash](/api-references/text-models-llm/google/gemini-2.0-flash.md) - [gemini-2.5-flash-lite-preview](/api-references/text-models-llm/google/gemini-2.5-flash-lite-preview.md) - [gemini-2.5-flash](/api-references/text-models-llm/google/gemini-2.5-flash.md) - [gemini-2.5-pro](/api-references/text-models-llm/google/gemini-2.5-pro.md) - [gemini-3-pro-preview](/api-references/text-models-llm/google/gemini-3-pro-preview.md) - [gemma-3 (4B and 12B)](/api-references/text-models-llm/google/gemma-3.md) - [gemma-3 (27B)](/api-references/text-models-llm/google/gemma-3-27b.md) - [gemma-3n-4b](/api-references/text-models-llm/google/gemma-3n-4b.md) - [gemini-3-flash-preview](/api-references/text-models-llm/google/gemini-3-flash-preview.md) --- # Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-3.5-turbo.md # gpt-3.5-turbo

This documentation is valid for the following list of our models:

* `gpt-3.5-turbo`
* `gpt-3.5-turbo-0125`
* `gpt-3.5-turbo-1106`
## Model Overview

This model builds on the capabilities of earlier versions, offering improved natural language understanding and generation for more realistic and contextually relevant conversations. It excels in handling a wide range of conversational scenarios, providing responses that are not only accurate but also contextually appropriate.

You can also view [a detailed comparison of this model](https://aimlapi.com/comparisons/llama-3-vs-chatgpt-3-5-comparison) on our main website.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**\
:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field – this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schemas), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
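For illustration, a minimal request covering steps 3–5 might look like the sketch below; it mirrors the structure of the ready-to-copy snippets in the [Code Example](#code-example) section at the bottom of the page.

```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-3.5-turbo",
        "messages": [
            # Insert your prompt here, instead of Hello
            {"role": "user", "content": "Hello"},
        ],
    },
)
print(response.json())
```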
## API Schemas
**Chat Completions vs. Responses API**

**Chat Completions**\
The *chat completions* API is the older, chat-oriented interface where you send a list of messages (`role: user`, `role: assistant`, etc.), and the model returns a single response. It was designed specifically for conversational workflows and follows a structured chat message format. It is now considered a legacy interface.

**Responses**\
The *Responses* API is the newer, unified interface used across OpenAI’s latest models. Instead of focusing only on chat, it supports multiple input types (text, images, audio, tools, etc.) and multiple output modalities (text, JSON, images, audio, video). It is more flexible, more consistent across models, and intended to replace chat completions entirely.
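To make the difference concrete, here is a small sketch (not taken from the official examples) contrasting the request shape of the two endpoints for the same prompt; the field names follow the schemas below.

```python
# Chat Completions: the conversation is a list of role-tagged messages.
chat_completions_payload = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Hello"}],
}
# POST https://api.aimlapi.com/v1/chat/completions

# Responses: a single `input` field, which can be a plain string
# or a list of structured input items (text, files, etc.).
responses_payload = {
    "model": "gpt-3.5-turbo",
    "input": "Hello",
}
# POST https://api.aimlapi.com/v1/responses
```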
### Chat Completions Endpoint ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["gpt-3.5-turbo","gpt-3.5-turbo-0125","gpt-3.5-turbo-1106"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. 
Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. 
logprobs must be set to True if this parameter is used."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. 
If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"gpt-3.5-turbo"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. 
Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ### Responses Endpoint This endpoint is currently used *only* with OpenAI models. Some models support both the `/chat/completions` and `/responses` endpoints, while others support only one of them. ## POST /v1/responses > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/responses":{"post":{"operationId":"_v1_responses","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["gpt-3.5-turbo","gpt-3.5-turbo-0125","gpt-3.5-turbo-1106"]},"input":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the user role."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
## POST /v1/responses

> ```json
{"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/responses":{"post":{"operationId":"_v1_responses","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["gpt-3.5-turbo","gpt-3.5-turbo-0125","gpt-3.5-turbo-1106"]},"input":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the user role."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"],"description":"An output message from the model."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"],"description":"A tool call to run a function."},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. 
Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"],"description":"The output of a function tool call."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The JSON schema describing the tool's input."},"name":{"type":"string","description":"The name of the tool."},"annotations":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Additional annotations about the tool."},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["input_schema","name"]},"description":"The tools available on the server."},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"],"description":"A list of tools available on an MCP server."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"],"description":"A request for human approval of a tool invocation."},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"],"description":"A response to an MCP approval request."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"],"description":"An invocation of a tool on an MCP server."},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}],"description":"Text, image, or file inputs to the model, used to generate a response."},"background":{"type":"boolean","default":false,"description":"Whether to run the model response in the background."},"instructions":{"type":"string","nullable":true,"description":"A system (or developer) message inserted into the model's context.\n\nWhen using along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses."},"include":{"type":"array","nullable":true,"items":{"type":"string","enum":["message.input_image.image_url","computer_call_output.output.image_url","reasoning.encrypted_content","code_interpreter_call.outputs"]},"description":"Specify additional output data to include in the model response. Currently supported values are:\n- code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.\n- computer_call_output.output.image_url: Include image urls from the computer call output.\n- file_search_call.results: Include the search results of the file search tool call.\n- message.output_text.logprobs: Include logprobs with assistant messages.\n- reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).\n"},"max_output_tokens":{"type":"integer","description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]}]},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"store":{"type":"boolean","nullable":true,"default":false,"description":"Whether to store the generated model response for later retrieval via API."},"stream":{"type":"boolean","nullable":true,"default":false,"description":"If set to true, the model response data will be streamed to the client as it is generated using server-sent events. "},"text":{"type":"object","properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["format"],"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"truncation":{"type":"string","enum":["auto","disabled"],"default":"disabled","description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"tools":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","description":"A description of the function. Used by the model to determine whether or not to call the function."}},"required":["name","parameters","strict","type"],"description":"Defines a function in your own code the model can choose to call."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."}]},"description":"An array of tools the model may call while generating a response. 
You can specify which tool to use by setting the tool_choice parameter."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"]}],"description":"How the model should select which tool (or tools) to use when generating a response."}},"required":["model","input"],"title":"gpt-3.5-turbo"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. 
Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"]}},"text/event-stream":{"schema":{"oneOf":[{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.done"],"description":"The type of the 
event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.done"],"description":"The type of the event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The partial code snippet being streamed by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The final code snippet output by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.done"],"description":"The type of the event."}},"required":["code","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter is interpreting code."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.interpreting"],"description":"The type of the 
event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. 
Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"Properties of the completed response."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.completed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."},"param":{"type":"string","description":"The error parameter."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["error"],"description":"The type of the event."}},"required":["code","message","param","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is searching."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The function-call arguments delta that is added."},"item_id":{"type":"string","description":"The ID of the output item that the function-call arguments delta is added to."},"output_index":{"type":"number","description":"The index of the output item that the function-call arguments delta is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"The function-call arguments."},"item_id":{"type":"string","description":"The ID of the item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this 
Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. 
One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.in_progress"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was 
created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.failed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The 
error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was incomplete."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.incomplete"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was added."},"output_index":{"type":"number","description":"The index of the output item that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.added"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was marked done."},"output_index":{"type":"number","description":"The index of the output item that was marked done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.done"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added to the summary."},"item_id":{"type":"string","description":"The ID of the item this summary text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","summary_index","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary text is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"text":{"type":"string","description":"The full text of the completed reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.done"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","summary_index","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part this delta is associated with."},"delta":{"type":"string","description":"The text delta that was added to the reasoning content."},"item_id":{"type":"string","description":"The ID of the item this reasoning text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.reasoning_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part."},"item_id":{"type":"string","description":"The ID of the item this reasoning text is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The full text of the completed reasoning content."},"type":{"type":"string","enum":["response.reasoning_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","sequence_number","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is added to."},"delta":{"type":"string","description":"The refusal text that is added."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is added to."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is finalized."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is finalized."},"refusal":{"type":"string","description":"The refusal text that is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","refusal","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web 
search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.generating"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"partial_image_b64":{"type":"string","description":"Base64-encoded partial image data, suitable for rendering as an image."},"partial_image_index":{"type":"number","description":"0-based index for the partial image (backend is 1-based, but this is 0-based for the user)."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["response.image_generation_call.partial_image"],"description":"The type of the event."}},"required":["item_id","output_index","partial_image_b64","partial_image_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"A JSON string containing the partial update to the arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string containing the finalized arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that completed."},"output_index":{"type":"number","description":"The index of the output item that completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that produced this output."},"output_index":{"type":"number","description":"The index of the output item that was processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool 
call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that is being processed."},"output_index":{"type":"number","description":"The index of the output item that is being processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"annotation":{"nullable":true,"description":"The annotation object being added."},"annotation_index":{"type":"number","description":"The index of the annotation within the content part."},"content_index":{"type":"number","description":"The index of the content part within the output item."},"item_id":{"type":"string","description":"The unique identifier of the item to which the annotation is being added."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.annotation.added"],"description":"The type of the event."}},"required":["annotation_index","content_index","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The full response object that is queued."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.queued"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The incremental input data (delta) for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this delta applies 
to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"input":{"type":"string","description":"The complete input data for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this event applies to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.done"],"description":"The type of the event."}},"required":["input","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The completed summary part."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.done"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text content is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the text content is finalized."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text content is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The text content that is finalized."},"type":{"type":"string","enum":["response.output_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","logprobs","output_index","sequence_number","text","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the 
response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The summary part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.added"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text delta was added to."},"delta":{"type":"string","description":"The text delta that was added."},"item_id":{"type":"string","description":"The ID of the output item that the text delta was added to."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text delta was added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","logprobs","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that is done."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the 
event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that is done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was created."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.created"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that was added."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added 
to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.added"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]}]}}}}}}}}}
```

## Code Example

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-3.5-turbo-0125",
        "messages": [
            {
                "role": "user",
                "content": "Hello"  # insert your prompt here, instead of Hello
            }
        ]
    }
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  const response = await fetch('https://api.aimlapi.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      // insert your AIML API Key instead of 
      'Authorization': 'Bearer ',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'gpt-3.5-turbo-0125',
      messages: [
        {
          role: 'user',
          content: 'Hello' // insert your prompt here, instead of Hello
        }
      ],
    }),
  });

  const data = await response.json();
  console.log(JSON.stringify(data, null, 2));
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "chatcmpl-BKKS4Aulz4SaVm81hHo7HMKEcQmtk",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I assist you today?",
        "refusal": null,
        "annotations": []
      }
    }
  ],
  "created": 1744184876,
  "model": "gpt-3.5-turbo-0125",
  "usage": {
    "prompt_tokens": 50,
    "completion_tokens": 126,
    "total_tokens": 176,
    "prompt_tokens_details": {
      "cached_tokens": 0,
      "audio_tokens": 0
    },
    "completion_tokens_details": {
      "reasoning_tokens": 0,
      "audio_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    }
  },
  "system_fingerprint": null
}
```
{% endcode %}
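If you only need the generated text rather than the full response object, it lives in the `choices` array shown above. A minimal sketch, assuming `data` holds the parsed response from the Python tab:

{% code overflow="wrap" %}
```python
# Minimal sketch: pull the assistant's reply out of the parsed chat completion
# response shown above (assumes `data` is the dict returned by response.json()).
reply = data["choices"][0]["message"]["content"]
print(reply)  # -> "Hello! How can I assist you today?"

# Token accounting, e.g. for cost tracking
print(data["usage"]["total_tokens"])
```
{% endcode %}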
## Code Example #2: Using /responses Endpoint

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-3.5-turbo",
        "input": "Hello"  # Insert your question for the model here, instead of Hello
    }
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  try {
    const response = await fetch('https://api.aimlapi.com/v1/responses', {
      method: 'POST',
      headers: {
        // Insert your AIML API Key instead of 
        'Authorization': 'Bearer ',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'gpt-3.5-turbo',
        input: 'Hello', // Insert your question here, instead of Hello
      }),
    });

    if (!response.ok) {
      throw new Error(`HTTP error! Status ${response.status}`);
    }

    const data = await response.json();
    console.log(JSON.stringify(data, null, 2));
  } catch (error) {
    console.error('Error', error);
  }
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "resp_686ba45ce63481a2a4b1fad55d2bea8102a1cc22f1a1bcf1",
  "object": "response",
  "created_at": 1751884892,
  "error": null,
  "incomplete_details": null,
  "instructions": null,
  "max_output_tokens": 512,
  "model": "gpt-3.5-turbo",
  "output": [
    {
      "id": "rs_686ba463d18481a29dde85cfd7b055bf02a1cc22f1a1bcf1",
      "type": "reasoning",
      "summary": []
    },
    {
      "id": "msg_686ba463d4e081a2b2e2aff962ab00f702a1cc22f1a1bcf1",
      "type": "message",
      "status": "in_progress",
      "content": [
        {
          "type": "output_text",
          "annotations": [],
          "logprobs": [],
          "text": "Hello! How can I help you today?"
        }
      ],
      "role": "assistant"
    }
  ],
  "parallel_tool_calls": true,
  "previous_response_id": null,
  "reasoning": {
    "effort": "medium",
    "summary": null
  },
  "temperature": 1,
  "text": {
    "format": {
      "type": "text"
    }
  },
  "tool_choice": "auto",
  "tools": [],
  "top_p": 1,
  "truncation": "disabled",
  "usage": {
    "input_tokens": 294,
    "input_tokens_details": {
      "cached_tokens": 0
    },
    "output_tokens": 2520,
    "output_tokens_details": {
      "reasoning_tokens": 0
    },
    "total_tokens": 2814
  },
  "metadata": {},
  "output_text": "Hello! How can I help you today?"
}
```
{% endcode %}
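Unlike a chat completion, a `/responses` result returns an `output` array of typed items (reasoning, message, tool calls) plus a convenience `output_text` field, as shown above. A minimal sketch for extracting the text, assuming `data` is the dict returned by the Python tab:

{% code overflow="wrap" %}
```python
# Minimal sketch: extract the generated text from the /responses payload above
# (assumes `data` is the dict returned by response.json()).

# Convenience field, when present:
print(data.get("output_text"))

# Or walk the typed output items and collect the output_text parts:
texts = [
    part["text"]
    for item in data["output"] if item["type"] == "message"
    for part in item["content"] if part["type"] == "output_text"
]
print("".join(texts))
```
{% endcode %}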
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4-preview.md # gpt-4-preview

This documentation is valid for the following list of our models:

  • gpt-4-0125-preview
  • gpt-4-1106-preview
## Model Overview

Before the release of GPT-4 Turbo, OpenAI introduced two preview models that allowed users to test advanced features ahead of a full rollout. These models supported JSON mode for structured responses, parallel function calling to handle multiple API functions in a single request, and reproducible output for more consistent results across runs. They also offer improved code generation performance and reduce the number of cases where the model fails to complete a task.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field; this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schemas), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
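The overview above mentions JSON mode and reproducible output; in the schemas below these correspond to the `response_format` and `seed` request parameters. The following is a minimal sketch only (the prompt text is illustrative, and the empty Bearer value must be replaced with your own key); see the code example referenced in the steps above for the canonical starting point:

{% code overflow="wrap" %}
```python
# Hedged sketch: a /v1/chat/completions request to one of the preview models,
# using parameters listed in the schema below.
# response_format {"type": "json_object"} enables JSON mode; seed asks for
# best-effort reproducible output (Beta, per the schema).
import json
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4-0125-preview",
        "messages": [
            # JSON mode requires a system or user message that explicitly
            # asks for JSON (see the response_format notes in the schema).
            {"role": "system", "content": "Reply with a JSON object."},
            {"role": "user", "content": "List three primary colors."},
        ],
        "response_format": {"type": "json_object"},
        "seed": 42,
        "max_tokens": 256,
    },
)
response.raise_for_status()
print(json.dumps(response.json(), indent=2, ensure_ascii=False))
```
{% endcode %}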
## API Schemas
Chat Completions vs. Responses API

**Chat Completions**\
The *chat completions* API is the older, chat-oriented interface where you send a list of messages (`role: user`, `role: assistant`, etc.), and the model returns a single response. It was designed specifically for conversational workflows and follows a structured chat message format. It is now considered a legacy interface.

**Responses**\
The *Responses* API is the newer, unified interface used across OpenAI’s latest models. Instead of focusing only on chat, it supports multiple input types (text, images, audio, tools, etc.) and multiple output modalities (text, JSON, images, audio, video). It is more flexible, more consistent across models, and intended to replace chat completions entirely.
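For these preview models the practical difference is mostly the shape of the request body: `/v1/chat/completions` takes a `messages` array, while `/v1/responses` takes an `input` field. A minimal sketch of the two payloads (prompt text is illustrative; everything else follows the schemas below):

{% code overflow="wrap" %}
```python
# Hedged sketch: the same question expressed for both endpoints.
# Only the request body differs; headers and authentication are identical.

chat_completions_payload = {  # POST https://api.aimlapi.com/v1/chat/completions
    "model": "gpt-4-1106-preview",
    "messages": [{"role": "user", "content": "Hello"}],
}

responses_payload = {  # POST https://api.aimlapi.com/v1/responses
    "model": "gpt-4-1106-preview",
    "input": "Hello",  # plain text input, equivalent to a user-role message
}
```
{% endcode %}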
### Chat Completions Endpoint ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["gpt-4-0125-preview","gpt-4-1106-preview"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. 
Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."}},"required":["model","messages"],"title":"gpt-4-0125-preview"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ### Responses Endpoint This endpoint is currently used *only* with OpenAI models. Some models support both the `/chat/completions` and `/responses` endpoints, while others support only one of them. 
## POST /v1/responses > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/responses":{"post":{"operationId":"_v1_responses","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["gpt-4-0125-preview","gpt-4-1106-preview"]},"input":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the user role."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. 
Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"],"description":"An output message from the model."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"],"description":"A tool call to run a function."},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"],"description":"The output of a function tool call."},{"type":"object","properties":{"code":{"type":"string","description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","interpreting"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["code","id","outputs","status","type","container_id"],"description":"A tool call to run code."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The JSON schema describing the tool's input."},"name":{"type":"string","description":"The name of the tool."},"annotations":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Additional annotations about the tool."},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["input_schema","name"]},"description":"The tools available on the server."},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"],"description":"A list of tools available on an MCP server."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"],"description":"A request for human approval of a tool invocation."},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"],"description":"A response to an MCP approval request."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"],"description":"An invocation of a tool on an MCP server."},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}],"description":"Text, image, or file inputs to the model, used to generate a response."},"background":{"type":"boolean","default":false,"description":"Whether to run the model response in the background."},"instructions":{"type":"string","nullable":true,"description":"A system (or developer) message inserted into the model's context.\n\nWhen using along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses."},"include":{"type":"array","nullable":true,"items":{"type":"string","enum":["message.input_image.image_url","computer_call_output.output.image_url","reasoning.encrypted_content","code_interpreter_call.outputs"]},"description":"Specify additional output data to include in the model response. Currently supported values are:\n- code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.\n- computer_call_output.output.image_url: Include image urls from the computer call output.\n- file_search_call.results: Include the search results of the file search tool call.\n- message.output_text.logprobs: Include logprobs with assistant messages.\n- reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).\n"},"max_output_tokens":{"type":"integer","description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]}]},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"store":{"type":"boolean","nullable":true,"default":false,"description":"Whether to store the generated model response for later retrieval via API."},"stream":{"type":"boolean","nullable":true,"default":false,"description":"If set to true, the model response data will be streamed to the client as it is generated using server-sent events. "},"text":{"type":"object","properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["format"],"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"truncation":{"type":"string","enum":["auto","disabled"],"default":"disabled","description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"tools":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","description":"A description of the function. Used by the model to determine whether or not to call the function."}},"required":["name","parameters","strict","type"],"description":"Defines a function in your own code the model can choose to call."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. 
Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"]}],"description":"How the model should select which tool (or tools) to use when generating a response."}},"required":["model","input"],"title":"gpt-4-0125-preview"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. 
Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g."},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g."},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"]}},"text/event-stream":{"schema":{"oneOf":[{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.done"],"description":"The type of the 
event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.done"],"description":"The type of the event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The partial code snippet being streamed by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The final code snippet output by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.done"],"description":"The type of the event."}},"required":["code","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter is interpreting code."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.interpreting"],"description":"The type of the 
event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. 
Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"Properties of the completed response."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.completed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."},"param":{"type":"string","description":"The error parameter."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["error"],"description":"The type of the event."}},"required":["code","message","param","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is searching."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The function-call arguments delta that is added."},"item_id":{"type":"string","description":"The ID of the output item that the function-call arguments delta is added to."},"output_index":{"type":"number","description":"The index of the output item that the function-call arguments delta is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"The function-call arguments."},"item_id":{"type":"string","description":"The ID of the item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this 
Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. 
One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string","description":"The name of the tool to run."},"server_label":{"type":"string","description":"The label of the MCP server."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string","description":"The name of the tool to run."},"server_label":{"type":"string","description":"The label of the MCP server."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.in_progress"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was 
created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.failed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The 
error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was incomplete."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.incomplete"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was added."},"output_index":{"type":"number","description":"The index of the output item that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.added"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was marked done."},"output_index":{"type":"number","description":"The index of the output item that was marked done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.done"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added to the summary."},"item_id":{"type":"string","description":"The ID of the item this summary text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","summary_index","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary text is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"text":{"type":"string","description":"The full text of the completed reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.done"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","summary_index","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part this delta is associated with."},"delta":{"type":"string","description":"The text delta that was added to the reasoning content."},"item_id":{"type":"string","description":"The ID of the item this reasoning text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.reasoning_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part."},"item_id":{"type":"string","description":"The ID of the item this reasoning text is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The full text of the completed reasoning content."},"type":{"type":"string","enum":["response.reasoning_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","sequence_number","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is added to."},"delta":{"type":"string","description":"The refusal text that is added."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is added to."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is finalized."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is finalized."},"refusal":{"type":"string","description":"The refusal text that is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","refusal","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web 
search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.generating"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"partial_image_b64":{"type":"string","description":"Base64-encoded partial image data, suitable for rendering as an image."},"partial_image_index":{"type":"number","description":"0-based index for the partial image (backend is 1-based, but this is 0-based for the user)."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["response.image_generation_call.partial_image"],"description":"The type of the event."}},"required":["item_id","output_index","partial_image_b64","partial_image_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"A JSON string containing the partial update to the arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string containing the finalized arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that completed."},"output_index":{"type":"number","description":"The index of the output item that completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that produced this output."},"output_index":{"type":"number","description":"The index of the output item that was processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool 
call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that is being processed."},"output_index":{"type":"number","description":"The index of the output item that is being processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"annotation":{"nullable":true,"description":"The annotation object being added."},"annotation_index":{"type":"number","description":"The index of the annotation within the content part."},"content_index":{"type":"number","description":"The index of the content part within the output item."},"item_id":{"type":"string","description":"The unique identifier of the item to which the annotation is being added."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.annotation.added"],"description":"The type of the event."}},"required":["annotation_index","content_index","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The full response object that is queued."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.queued"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The incremental input data (delta) for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this delta applies 
to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"input":{"type":"string","description":"The complete input data for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this event applies to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.done"],"description":"The type of the event."}},"required":["input","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The completed summary part."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.done"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text content is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the text content is finalized."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text content is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The text content that is finalized."},"type":{"type":"string","enum":["response.output_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","logprobs","output_index","sequence_number","text","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the 
response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The summary part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.added"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text delta was added to."},"delta":{"type":"string","description":"The text delta that was added."},"item_id":{"type":"string","description":"The ID of the output item that the text delta was added to."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text delta was added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","logprobs","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that is done."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the 
event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that is done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was created."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.created"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that was added."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added 
to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.added"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]}]}}}}}}}}}
```

## Code Example

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4-0125-preview",
        "messages": [
            {
                "role": "user",
                "content": "Hello"  # insert your prompt here, instead of Hello
            }
        ]
    }
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  const response = await fetch('https://api.aimlapi.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      // insert your AIML API Key instead of
      'Authorization': 'Bearer ',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'gpt-4-0125-preview',
      messages: [
        {
          role: 'user',
          content: 'Hello' // insert your prompt here, instead of Hello
        }
      ],
    }),
  });

  const data = await response.json();
  console.log(JSON.stringify(data, null, 2));
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "chatcmpl-BKKXr9a69c5WOJr8R2d8rP2Wd0XZa",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I assist you today?",
        "refusal": null,
        "annotations": []
      }
    }
  ],
  "created": 1744185235,
  "model": "gpt-4-1106-preview",
  "usage": {
    "prompt_tokens": 168,
    "completion_tokens": 630,
    "total_tokens": 798,
    "prompt_tokens_details": {
      "cached_tokens": 0,
      "audio_tokens": 0
    },
    "completion_tokens_details": {
      "reasoning_tokens": 0,
      "audio_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    }
  },
  "system_fingerprint": null
}
```
{% endcode %}
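To pull just the assistant's reply out of a response like the one above, you can index into `choices`. This is a minimal sketch, assuming `data` holds the parsed JSON from the Python example:

```python
# Minimal sketch: extract the assistant's reply from the parsed
# chat completion response shown above (assumes data = response.json()).
reply = data["choices"][0]["message"]["content"]
print(reply)  # -> "Hello! How can I assist you today?"
```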
## Code Example #2: Using /responses Endpoint

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4-0125-preview",
        "input": "Hello"  # Insert your question for the model here, instead of Hello
    }
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  try {
    const response = await fetch('https://api.aimlapi.com/v1/responses', {
      method: 'POST',
      headers: {
        // Insert your AIML API Key instead of
        'Authorization': 'Bearer ',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'gpt-4-0125-preview',
        input: 'Hello', // Insert your question here, instead of Hello
      }),
    });

    if (!response.ok) {
      throw new Error(`HTTP error! Status ${response.status}`);
    }

    const data = await response.json();
    console.log(JSON.stringify(data, null, 2));
  } catch (error) {
    console.error('Error', error);
  }
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "resp_686ba45ce63481a2a4b1fad55d2bea8102a1cc22f1a1bcf1",
  "object": "response",
  "created_at": 1751884892,
  "error": null,
  "incomplete_details": null,
  "instructions": null,
  "max_output_tokens": 512,
  "model": "gpt-4-0125-preview",
  "output": [
    {
      "id": "rs_686ba463d18481a29dde85cfd7b055bf02a1cc22f1a1bcf1",
      "type": "reasoning",
      "summary": []
    },
    {
      "id": "msg_686ba463d4e081a2b2e2aff962ab00f702a1cc22f1a1bcf1",
      "type": "message",
      "status": "in_progress",
      "content": [
        {
          "type": "output_text",
          "annotations": [],
          "logprobs": [],
          "text": "Hello! How can I help you today?"
        }
      ],
      "role": "assistant"
    }
  ],
  "parallel_tool_calls": true,
  "previous_response_id": null,
  "reasoning": {
    "effort": "medium",
    "summary": null
  },
  "temperature": 1,
  "text": {
    "format": {
      "type": "text"
    }
  },
  "tool_choice": "auto",
  "tools": [],
  "top_p": 1,
  "truncation": "disabled",
  "usage": {
    "input_tokens": 294,
    "input_tokens_details": {
      "cached_tokens": 0
    },
    "output_tokens": 2520,
    "output_tokens_details": {
      "reasoning_tokens": 0
    },
    "total_tokens": 2814
  },
  "metadata": {},
  "output_text": "Hello! How can I help you today?"
}
```
{% endcode %}
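With the `/responses` endpoint, the generated text sits inside the `output` array rather than `choices`. A minimal sketch for extracting it from a payload like the one above, assuming `data` holds the parsed JSON from the Python example:

```python
# Minimal sketch: collect the text parts from a /responses payload
# like the one shown above (assumes data = response.json()).
texts = [
    part["text"]
    for item in data.get("output", [])
    if item.get("type") == "message"
    for part in item.get("content", [])
    if part.get("type") == "output_text"
]
print("".join(texts))  # -> "Hello! How can I help you today?"
```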
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4-turbo.md # gpt-4-turbo

This documentation is valid for the following list of our models:

  • gpt-4-turbo
  • gpt-4-turbo-2024-04-09
Try in Playground
## Model Overview

The model enhances the already impressive capabilities of [gpt-4](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4) by significantly reducing response times, making it ideal for applications requiring instant feedback. It replaces all previous [gpt-4-preview](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4-preview) models.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. A minimal sketch of passing optional parameters is shown right after these instructions. Below, you can find the corresponding [API schema](#api-schemas), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
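As mentioned in step 4, only `model` and `messages` are required; everything else is optional. A minimal Python sketch of a request that also sets the optional `temperature` and `max_tokens` parameters (both are documented in the API schema below; the values here are only illustrative):

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key after "Bearer "
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        # Required parameters
        "model": "gpt-4-turbo",
        "messages": [{"role": "user", "content": "Hello"}],
        # Optional parameters (see the API schema below for the full list)
        "temperature": 0.2,  # lower values make the output more deterministic
        "max_tokens": 256,   # upper bound on the number of generated tokens
    },
)
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}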
## API Schemas
Chat Completions vs. Responses API

**Chat Completions**\
The *chat completions* API is the older, chat-oriented interface where you send a list of messages (`role: user`, `role: assistant`, etc.), and the model returns a single response. It was designed specifically for conversational workflows and follows a structured chat message format. It is now considered a legacy interface.

**Responses**\
The *Responses* API is the newer, unified interface used across OpenAI’s latest models. Instead of focusing only on chat, it supports multiple input types (text, images, audio, tools, etc.) and multiple output modalities (text, JSON, images, audio, video). It is more flexible, more consistent across models, and intended to replace chat completions entirely.
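For this model, the practical difference shows up mostly in the request body: `/v1/chat/completions` expects a `messages` array, while `/v1/responses` expects an `input` field. A minimal sketch of the two payloads as Python dictionaries, using the same placeholder prompt as the examples above (`max_output_tokens` is the `/responses` counterpart of `max_tokens`, per the schemas below):

{% code overflow="wrap" %}
```python
# /v1/chat/completions: the conversation is a list of role-tagged messages
chat_completions_payload = {
    "model": "gpt-4-turbo",
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Hello"},
    ],
}

# /v1/responses: a single `input` field (a plain string or a list of typed items)
responses_payload = {
    "model": "gpt-4-turbo",
    "input": "Hello",
    "max_output_tokens": 256,  # optional cap on generated tokens
}
```
{% endcode %}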
### Chat Completions Endpoint ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["gpt-4-turbo","gpt-4-turbo-2024-04-09"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. 
Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."}},"required":["model","messages"],"title":"gpt-4-turbo"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ### Responses Endpoint This endpoint is currently used *only* with OpenAI models. Some models support both the `/chat/completions` and `/responses` endpoints, while others support only one of them. 
## POST /v1/responses > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/responses":{"post":{"operationId":"_v1_responses","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["gpt-4-turbo","gpt-4-turbo-2024-04-09"]},"input":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the user role."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. 
Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"],"description":"An output message from the model."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"],"description":"A tool call to run a function."},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"],"description":"The output of a function tool call."},{"type":"object","properties":{"code":{"type":"string","description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","interpreting"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["code","id","outputs","status","type","container_id"],"description":"A tool call to run code."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The JSON schema describing the tool's input."},"name":{"type":"string","description":"The name of the tool."},"annotations":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Additional annotations about the tool."},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["input_schema","name"]},"description":"The tools available on the server."},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"],"description":"A list of tools available on an MCP server."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"],"description":"A request for human approval of a tool invocation."},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"],"description":"A response to an MCP approval request."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"],"description":"An invocation of a tool on an MCP server."},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}],"description":"Text, image, or file inputs to the model, used to generate a response."},"background":{"type":"boolean","default":false,"description":"Whether to run the model response in the background."},"instructions":{"type":"string","nullable":true,"description":"A system (or developer) message inserted into the model's context.\n\nWhen using along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses."},"include":{"type":"array","nullable":true,"items":{"type":"string","enum":["message.input_image.image_url","computer_call_output.output.image_url","reasoning.encrypted_content","code_interpreter_call.outputs"]},"description":"Specify additional output data to include in the model response. Currently supported values are:\n- code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.\n- computer_call_output.output.image_url: Include image urls from the computer call output.\n- file_search_call.results: Include the search results of the file search tool call.\n- message.output_text.logprobs: Include logprobs with assistant messages.\n- reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).\n"},"max_output_tokens":{"type":"integer","description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]}]},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"store":{"type":"boolean","nullable":true,"default":false,"description":"Whether to store the generated model response for later retrieval via API."},"stream":{"type":"boolean","nullable":true,"default":false,"description":"If set to true, the model response data will be streamed to the client as it is generated using server-sent events. "},"text":{"type":"object","properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["format"],"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"truncation":{"type":"string","enum":["auto","disabled"],"default":"disabled","description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"tools":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","description":"A description of the function. Used by the model to determine whether or not to call the function."}},"required":["name","parameters","strict","type"],"description":"Defines a function in your own code the model can choose to call."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. 
Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"]}],"description":"How the model should select which tool (or tools) to use when generating a response."}},"required":["model","input"],"title":"gpt-4-turbo"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. 
Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
The remaining entries allowed in the `tools` array are:

* **MCP server** (`type: "mcp"`) – `server_label`, `server_url`, optional `allowed_tools` (a list of tool names or a filter object), optional `headers` for authentication, and `require_approval` (`"always"`, `"never"`, or per-tool lists). Gives the model access to tools exposed by a remote Model Context Protocol (MCP) server.
* **Code interpreter** (`type: "code_interpreter"`) – `container` (a container ID string or `{"type": "auto"}`). Runs Python code to help generate a response.
* **Local shell** (`type: "local_shell"`) – lets the model execute shell commands in a local environment.
* **Function** (`type: "function"`) – `name`, optional `parameters` (a JSON Schema object), optional `strict` flag for strict parameter validation, and an optional `description` the model uses to decide whether to call the function.
* **Image generation** (`type: "image_generation"`) – `model` (`gpt-image-1`), `background` (`transparent`, `opaque`, `auto`), `input_image_mask`, `moderation` (`auto`, `low`), `output_compression`, `output_format` (`png`, `webp`, `jpeg`), `partial_images` (0–3), `quality` (`low`, `medium`, `high`, `auto`), and `size` (`1024x1024`, `1024x1536`, `1536x1024`, `auto`).

The remaining top-level fields of the response object are:

* `top_p` – nucleus-sampling alternative to `temperature`: only tokens within the top `top_p` probability mass are considered (0.1 means the top 10%). We generally recommend altering this or `temperature`, but not both.
* `truncation` – `auto` (drop items from the middle of the conversation when the context window is exceeded) or `disabled` (default; the request fails with a 400 error instead).
* `usage` – `input_tokens`, `input_tokens_details.cached_tokens`, `output_tokens`, `output_tokens_details.reasoning_tokens`, and `total_tokens`.

Required response fields: `created_at`, `id`, `model`, `object`, `parallel_tool_calls`.

When streaming, the endpoint returns `text/event-stream` events instead of a single JSON body. Every event carries a `type` and a `sequence_number`, plus an event-specific payload:

* `response.audio.delta` / `response.audio.done` and `response.audio.transcript.delta` / `response.audio.transcript.done` – audio and transcript chunks.
* `response.code_interpreter_call_code.delta` / `response.code_interpreter_call_code.done` – the partial and final code snippets produced by the code interpreter (with `item_id` and `output_index`).
* `response.code_interpreter_call.in_progress` / `response.code_interpreter_call.interpreting` / `response.code_interpreter_call.completed` – code interpreter call lifecycle.
* `response.completed` – carries the full final Response object under `response`, with the same schema as the non-streaming body: an `output` array of typed items (assistant `message` items whose `content` parts are `output_text` with `annotations` such as `url_citation`, `file_citation`, `container_file_citation`, `file_path` and optional `logprobs`, or `refusal`; `file_search_call`; `computer_call` and `computer_call_output`, where actions include `click`, `double_click`, `drag`, `keypress`, `move`, `screenshot`, `scroll`, `type`, and `wait`, plus `pending_safety_checks`; `web_search_call`; `function_call`; `reasoning`; `image_generation_call`; `code_interpreter_call`; `local_shell_call`; `mcp_list_tools`; `mcp_approval_request`; `mcp_call`), together with `output_text` (an SDK-only convenience aggregation), `instructions` (a string or an array of input items such as `input_text`, `input_image`, `input_file`, and prior items like `function_call_output`, `local_shell_call_output`, `mcp_approval_response`, or `item_reference`), `usage`, `status`, `temperature`, `top_p`, `tool_choice`, `tools`, `text.format` (`text`, `json_object`, or `json_schema`), `reasoning`, `metadata`, `max_output_tokens`, `previous_response_id`, `prompt`, `service_tier`, and `error` / `incomplete_details`.
* `error` – `code`, `message`, and `param`.
* `response.file_search_call.in_progress` / `response.file_search_call.searching` / `response.file_search_call.completed` – file search call lifecycle.
* `response.function_call_arguments.delta` / `response.function_call_arguments.done` – partial and final function-call arguments.
* Additional lifecycle events that, like `response.completed`, embed a full Response snapshot under `response`.
One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g. [{ x: 100, y: 200 }, { x: 200, y: 300 }]."},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g. [{ x: 100, y: 200 }, { x: 200, y: 300 }]."},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.in_progress"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was 
created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.failed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The 
error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was incomplete."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.incomplete"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was added."},"output_index":{"type":"number","description":"The index of the output item that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.added"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was marked done."},"output_index":{"type":"number","description":"The index of the output item that was marked done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.done"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added to the summary."},"item_id":{"type":"string","description":"The ID of the item this summary text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","summary_index","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary text is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"text":{"type":"string","description":"The full text of the completed reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.done"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","summary_index","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part this delta is associated with."},"delta":{"type":"string","description":"The text delta that was added to the reasoning content."},"item_id":{"type":"string","description":"The ID of the item this reasoning text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.reasoning_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part."},"item_id":{"type":"string","description":"The ID of the item this reasoning text is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The full text of the completed reasoning content."},"type":{"type":"string","enum":["response.reasoning_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","sequence_number","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is added to."},"delta":{"type":"string","description":"The refusal text that is added."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is added to."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is finalized."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is finalized."},"refusal":{"type":"string","description":"The refusal text that is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","refusal","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web 
search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.generating"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"partial_image_b64":{"type":"string","description":"Base64-encoded partial image data, suitable for rendering as an image."},"partial_image_index":{"type":"number","description":"0-based index for the partial image (backend is 1-based, but this is 0-based for the user)."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["response.image_generation_call.partial_image"],"description":"The type of the event."}},"required":["item_id","output_index","partial_image_b64","partial_image_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"A JSON string containing the partial update to the arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string containing the finalized arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that completed."},"output_index":{"type":"number","description":"The index of the output item that completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that produced this output."},"output_index":{"type":"number","description":"The index of the output item that was processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool 
call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that is being processed."},"output_index":{"type":"number","description":"The index of the output item that is being processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"annotation":{"nullable":true,"description":"The annotation object being added."},"annotation_index":{"type":"number","description":"The index of the annotation within the content part."},"content_index":{"type":"number","description":"The index of the content part within the output item."},"item_id":{"type":"string","description":"The unique identifier of the item to which the annotation is being added."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.annotation.added"],"description":"The type of the event."}},"required":["annotation_index","content_index","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string","description":"The name of the tool to run."},"server_label":{"type":"string","description":"The label of the MCP server making the request."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string","description":"The name of the tool to run."},"server_label":{"type":"string","description":"The label of the MCP server making the request."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The full response object that is queued."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.queued"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The incremental input data (delta) for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this delta applies 
to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"input":{"type":"string","description":"The complete input data for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this event applies to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.done"],"description":"The type of the event."}},"required":["input","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The completed summary part."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.done"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text content is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the text content is finalized."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text content is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The text content that is finalized."},"type":{"type":"string","enum":["response.output_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","logprobs","output_index","sequence_number","text","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the 
response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The summary part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.added"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text delta was added to."},"delta":{"type":"string","description":"The text delta that was added."},"item_id":{"type":"string","description":"The ID of the output item that the text delta was added to."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text delta was added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","logprobs","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that is done."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the 
event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that is done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was created."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.created"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that was added."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added 
to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.added"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]}]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"gpt-4-turbo", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'gpt-4-turbo', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "chatcmpl-BKKYo5xJ5uEzm8omnidM097vsMpYd",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I assist you today?",
        "refusal": null,
        "annotations": []
      }
    }
  ],
  "created": 1744185294,
  "model": "gpt-4-turbo-2024-04-09",
  "usage": {
    "prompt_tokens": 168,
    "completion_tokens": 630,
    "total_tokens": 798,
    "prompt_tokens_details": { "cached_tokens": 0, "audio_tokens": 0 },
    "completion_tokens_details": {
      "reasoning_tokens": 0,
      "audio_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    }
  },
  "system_fingerprint": "fp_101a39fff3"
}
```
{% endcode %}
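The generated text sits inside the first element of the `choices` array. A minimal sketch of extracting it, assuming `data` holds the parsed JSON from the example above:

{% code overflow="wrap" %}
```python
# Minimal sketch: pull the assistant's reply out of the response shown above.
# Assumes `data` is the dict returned by response.json() in the previous example
# and that the request succeeded.
reply = data["choices"][0]["message"]["content"]
print(reply)  # -> "Hello! How can I assist you today?"
```
{% endcode %}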
## Code Example #2: Using /responses Endpoint

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4-turbo",
        "input": "Hello"  # Insert your question for the model here, instead of Hello
    },
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  try {
    const response = await fetch('https://api.aimlapi.com/v1/responses', {
      method: 'POST',
      headers: {
        // Insert your AIML API Key instead of 
        'Authorization': 'Bearer ',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'gpt-4-turbo',
        input: 'Hello', // Insert your question here, instead of Hello
      }),
    });

    if (!response.ok) {
      throw new Error(`HTTP error! Status ${response.status}`);
    }

    const data = await response.json();
    console.log(JSON.stringify(data, null, 2));
  } catch (error) {
    console.error('Error', error);
  }
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "resp_686ba45ce63481a2a4b1fad55d2bea8102a1cc22f1a1bcf1", "object": "response", "created_at": 1751884892, "error": null, "incomplete_details": null, "instructions": null, "max_output_tokens": 512, "model": "gpt-4-turbo", "output": [ { "id": "rs_686ba463d18481a29dde85cfd7b055bf02a1cc22f1a1bcf1", "type": "reasoning", "summary": [] }, { "id": "msg_686ba463d4e081a2b2e2aff962ab00f702a1cc22f1a1bcf1", "type": "message", "status": "in_progress", "content": [ { "type": "output_text", "annotations": [], "logprobs": [], "text": "Hello! How can I help you today?" } ], "role": "assistant" } ], "parallel_tool_calls": true, "previous_response_id": null, "reasoning": { "effort": "medium", "summary": null }, "temperature": 1, "text": { "format": { "type": "text" } }, "tool_choice": "auto", "tools": [], "top_p": 1, "truncation": "disabled", "usage": { "input_tokens": 294, "input_tokens_details": { "cached_tokens": 0 }, "output_tokens": 2520, "output_tokens_details": { "reasoning_tokens": 0 }, "total_tokens": 2814 }, "metadata": {}, "output_text": "Hello! How can I help you today?" } ``` {% endcode %}
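Unlike a chat completion, a `/v1/responses` result returns an `output` array of items; the example above also exposes the aggregated `output_text` field. A minimal sketch of reading the generated text, assuming `data` holds the parsed JSON from the example above:

{% code overflow="wrap" %}
```python
# Minimal sketch: read the generated text from the /v1/responses result above.
# Assumes `data` is the dict returned by response.json() in the previous example.
text = data.get("output_text")
if text is None:
    # Fall back to walking the output array for message items with output_text parts.
    for item in data.get("output", []):
        if item.get("type") == "message":
            for part in item.get("content", []):
                if part.get("type") == "output_text":
                    text = part["text"]
print(text)  # -> "Hello! How can I help you today?"
```
{% endcode %}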
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4.1-mini.md # gpt-4.1-mini {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `openai/gpt-4.1-mini-2025-04-14` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview All models of the GPT-4.1 family outperform [GPT‑4o](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) and [GPT‑4o mini](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o-mini) across the board, with major gains in coding and instruction following. They also have larger context windows—supporting up to 1 million tokens of context—and are able to better use that context with improved long-context comprehension. They feature a refreshed knowledge cutoff of June 2024. This model, **GPT-4.1 mini**, is an impressive improvement in small model capabilities: it outperforms [GPT‑4o](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) on many benchmarks, matches its reasoning ability, and runs faster and cheaper—nearly half the latency and just a fraction of the cost. ## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. (A minimal sketch of such a request also follows these steps.)

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schemas), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
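For orientation, here is a minimal sketch of a request that follows the steps above, assuming the Chat Completions endpoint described in the API schema below. The full, copy-ready examples remain at the bottom of the page.

{% code overflow="wrap" %}
```python
import requests

# Minimal sketch of steps 2-5. Insert your real AIML API key after "Bearer ".
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-4.1-mini-2025-04-14",
        "messages": [{"role": "user", "content": "Hello"}],  # your prompt goes here
    },
)
print(response.json())
```
{% endcode %}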
## API Schemas
Chat Completions vs. Responses API **Chat Completions**\ The *chat completions* API is the older, chat-oriented interface where you send a list of messages (`role: user`, `role: assistant`, etc.), and the model returns a single response. It was designed specifically for conversational workflows and follows a structured chat message format. It is now considered a legacy interface. **Responses**\ The *Responses* API is the newer, unified interface used across OpenAI’s latest models. Instead of focusing only on chat, it supports multiple input types (text, images, audio, tools, etc.) and multiple output modalities (text, JSON, images, audio, video). It is more flexible, more consistent across models, and intended to replace chat completions entirely.
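The practical difference is the request shape: Chat Completions takes a `messages` list, while Responses takes an `input` field. A rough sketch of both calls, assuming (as the schemas on this page suggest) that this model is served through both endpoints:

{% code overflow="wrap" %}
```python
import requests

HEADERS = {
    "Authorization": "Bearer ",  # insert your AIML API key after "Bearer "
    "Content-Type": "application/json",
}

# Chat Completions: a list of role-tagged messages; reply is in choices[0].message.
chat = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers=HEADERS,
    json={
        "model": "openai/gpt-4.1-mini-2025-04-14",
        "messages": [{"role": "user", "content": "Hello"}],
    },
).json()
print(chat["choices"][0]["message"]["content"])

# Responses: a single input field; the aggregated reply may appear as output_text,
# or inside the items of the output array.
resp = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers=HEADERS,
    json={
        "model": "openai/gpt-4.1-mini-2025-04-14",
        "input": "Hello",
    },
).json()
print(resp.get("output_text"))
```
{% endcode %}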
### Chat Completions Endpoint ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-4.1-mini-2025-04-14"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. 
So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."}},"required":["model","messages"],"title":"openai/gpt-4.1-mini-2025-04-14"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ### Responses Endpoint This endpoint is currently used *only* with OpenAI models. Some models support both the `/chat/completions` and `/responses` endpoints, while others support only one of them. ## POST /v1/responses > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/responses":{"post":{"operationId":"_v1_responses","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-4.1-mini-2025-04-14"]},"input":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the user role."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. 
Instructions given with the developer or system role take precedence over instructions given with the user role."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"],"description":"An output message from the model."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"],"description":"The results of a web search tool call."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"],"description":"A tool call to run a function."},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"],"description":"The output of a function tool call."},{"type":"object","properties":{"code":{"type":"string","description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","interpreting"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["code","id","outputs","status","type","container_id"],"description":"A tool call to run code."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The JSON schema describing the tool's input."},"name":{"type":"string","description":"The name of the tool."},"annotations":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Additional annotations about the tool."},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["input_schema","name"]},"description":"The tools available on the server."},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. 
Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"],"description":"A list of tools available on an MCP server."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"],"description":"A request for human approval of a tool invocation."},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"],"description":"A response to an MCP approval request."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"],"description":"An invocation of a tool on an MCP server."},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}],"description":"Text, image, or file inputs to the model, used to generate a response."},"background":{"type":"boolean","default":false,"description":"Whether to run the model response in the background."},"instructions":{"type":"string","nullable":true,"description":"A system (or developer) message inserted into the model's context.\n\nWhen using along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses."},"include":{"type":"array","nullable":true,"items":{"type":"string","enum":["message.input_image.image_url","computer_call_output.output.image_url","reasoning.encrypted_content","code_interpreter_call.outputs"]},"description":"Specify additional output data to include in the model response. 
Currently supported values are:\n- code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.\n- computer_call_output.output.image_url: Include image urls from the computer call output.\n- file_search_call.results: Include the search results of the file search tool call.\n- message.output_text.logprobs: Include logprobs with assistant messages.\n- reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).\n"},"max_output_tokens":{"type":"integer","description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]}]},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"store":{"type":"boolean","nullable":true,"default":false,"description":"Whether to store the generated model response for later retrieval via API."},"stream":{"type":"boolean","nullable":true,"default":false,"description":"If set to true, the model response data will be streamed to the client as it is generated using server-sent events. "},"text":{"type":"object","properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. 
Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["format"],"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"truncation":{"type":"string","enum":["auto","disabled"],"default":"disabled","description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"tools":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","description":"A description of the function. Used by the model to determine whether or not to call the function."}},"required":["name","parameters","strict","type"],"description":"Defines a function in your own code the model can choose to call."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. 
Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."}]},"description":"An array of tools the model may call while generating a response. 
You can specify which tool to use by setting the tool_choice parameter."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"]}],"description":"How the model should select which tool (or tools) to use when generating a response."}},"required":["model","input"],"title":"openai/gpt-4.1-mini-2025-04-14"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. 
Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"]}},"text/event-stream":{"schema":{"oneOf":[{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.done"],"description":"The type of the 
event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.done"],"description":"The type of the event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The partial code snippet being streamed by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The final code snippet output by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.done"],"description":"The type of the event."}},"required":["code","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter is interpreting code."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.interpreting"],"description":"The type of the 
event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. 
Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"Properties of the completed response."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.completed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."},"param":{"type":"string","description":"The error parameter."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["error"],"description":"The type of the event."}},"required":["code","message","param","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is searching."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The function-call arguments delta that is added."},"item_id":{"type":"string","description":"The ID of the output item that the function-call arguments delta is added to."},"output_index":{"type":"number","description":"The index of the output item that the function-call arguments delta is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"The function-call arguments."},"item_id":{"type":"string","description":"The ID of the item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this 
Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. 
One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.in_progress"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was 
created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.failed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The 
error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was incomplete."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.incomplete"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was added."},"output_index":{"type":"number","description":"The index of the output item that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.added"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was marked done."},"output_index":{"type":"number","description":"The index of the output item that was marked done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.done"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added to the summary."},"item_id":{"type":"string","description":"The ID of the item this summary text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","summary_index","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary text is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"text":{"type":"string","description":"The full text of the completed reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.done"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","summary_index","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part this delta is associated with."},"delta":{"type":"string","description":"The text delta that was added to the reasoning content."},"item_id":{"type":"string","description":"The ID of the item this reasoning text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.reasoning_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part."},"item_id":{"type":"string","description":"The ID of the item this reasoning text is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The full text of the completed reasoning content."},"type":{"type":"string","enum":["response.reasoning_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","sequence_number","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is added to."},"delta":{"type":"string","description":"The refusal text that is added."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is added to."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is finalized."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is finalized."},"refusal":{"type":"string","description":"The refusal text that is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","refusal","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web 
search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.generating"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"partial_image_b64":{"type":"string","description":"Base64-encoded partial image data, suitable for rendering as an image."},"partial_image_index":{"type":"number","description":"0-based index for the partial image (backend is 1-based, but this is 0-based for the user)."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["response.image_generation_call.partial_image"],"description":"The type of the event."}},"required":["item_id","output_index","partial_image_b64","partial_image_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"A JSON string containing the partial update to the arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string containing the finalized arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that completed."},"output_index":{"type":"number","description":"The index of the output item that completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that produced this output."},"output_index":{"type":"number","description":"The index of the output item that was processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool 
call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that is being processed."},"output_index":{"type":"number","description":"The index of the output item that is being processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"annotation":{"nullable":true,"description":"The annotation object being added."},"annotation_index":{"type":"number","description":"The index of the annotation within the content part."},"content_index":{"type":"number","description":"The index of the content part within the output item."},"item_id":{"type":"string","description":"The unique identifier of the item to which the annotation is being added."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.annotation.added"],"description":"The type of the event."}},"required":["annotation_index","content_index","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The full response object that is queued."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.queued"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The incremental input data (delta) for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this delta applies 
to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"input":{"type":"string","description":"The complete input data for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this event applies to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.done"],"description":"The type of the event."}},"required":["input","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The completed summary part."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.done"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text content is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the text content is finalized."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text content is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The text content that is finalized."},"type":{"type":"string","enum":["response.output_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","logprobs","output_index","sequence_number","text","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the 
response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The summary part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.added"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text delta was added to."},"delta":{"type":"string","description":"The text delta that was added."},"item_id":{"type":"string","description":"The ID of the output item that the text delta was added to."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text delta was added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","logprobs","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that is done."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the 
event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that is done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was created."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.created"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that was added."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added 
to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.added"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]}]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"openai/gpt-4.1-mini-2025-04-14", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'openai/gpt-4.1-mini-2025-04-14', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "chatcmpl-BMsKsl6Q6IDdi8dudAXqx1v45wpL2",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?",
        "refusal": null,
        "annotations": []
      }
    }
  ],
  "created": 1744791782,
  "model": "gpt-4.1-mini-2025-04-14",
  "usage": {
    "prompt_tokens": 7,
    "completion_tokens": 34,
    "total_tokens": 41,
    "prompt_tokens_details": {
      "cached_tokens": 0,
      "audio_tokens": 0
    },
    "completion_tokens_details": {
      "reasoning_tokens": 0,
      "audio_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    }
  },
  "system_fingerprint": "fp_38647f5e19"
}
```
{% endcode %}
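If you only need the generated text rather than the whole response object, it sits in the first element of `choices`. A minimal sketch, continuing the Python example above and assuming `data` holds the parsed JSON shown in the response:

{% code overflow="wrap" %}
```python
# Pull the assistant's reply out of the chat completion response.
# `data` is the parsed JSON from the request above.
reply = data["choices"][0]["message"]["content"]
print(reply)  # -> Hello! How can I help you today?
```
{% endcode %}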
## Code Example #2: Using /responses Endpoint

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization":"Bearer ",
        "Content-Type":"application/json"
    },
    json={
        "model":"openai/gpt-4.1-mini-2025-04-14",
        "input":"Hello"  # Insert your question for the model here, instead of Hello
    }
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  try {
    const response = await fetch('https://api.aimlapi.com/v1/responses', {
      method: 'POST',
      headers: {
        // Insert your AIML API Key instead of 
        'Authorization': 'Bearer ',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'openai/gpt-4.1-mini-2025-04-14',
        input: 'Hello', // Insert your question here, instead of Hello
      }),
    });

    if (!response.ok) {
      throw new Error(`HTTP error! Status ${response.status}`);
    }

    const data = await response.json();
    console.log(JSON.stringify(data, null, 2));
  } catch (error) {
    console.error('Error', error);
  }
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "resp_686ba45ce63481a2a4b1fad55d2bea8102a1cc22f1a1bcf1",
  "object": "response",
  "created_at": 1751884892,
  "error": null,
  "incomplete_details": null,
  "instructions": null,
  "max_output_tokens": 512,
  "model": "openai/gpt-4.1-mini-2025-04-14",
  "output": [
    {
      "id": "rs_686ba463d18481a29dde85cfd7b055bf02a1cc22f1a1bcf1",
      "type": "reasoning",
      "summary": []
    },
    {
      "id": "msg_686ba463d4e081a2b2e2aff962ab00f702a1cc22f1a1bcf1",
      "type": "message",
      "status": "in_progress",
      "content": [
        {
          "type": "output_text",
          "annotations": [],
          "logprobs": [],
          "text": "Hello! How can I help you today?"
        }
      ],
      "role": "assistant"
    }
  ],
  "parallel_tool_calls": true,
  "previous_response_id": null,
  "reasoning": {
    "effort": "medium",
    "summary": null
  },
  "temperature": 1,
  "text": {
    "format": {
      "type": "text"
    }
  },
  "tool_choice": "auto",
  "tools": [],
  "top_p": 1,
  "truncation": "disabled",
  "usage": {
    "input_tokens": 294,
    "input_tokens_details": {
      "cached_tokens": 0
    },
    "output_tokens": 2520,
    "output_tokens_details": {
      "reasoning_tokens": 0
    },
    "total_tokens": 2814
  },
  "metadata": {},
  "output_text": "Hello! How can I help you today?"
}
```
{% endcode %}
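Note that with the `/responses` endpoint the generated text lives inside the `output` array (and is also surfaced as the `output_text` field in the response above) rather than in `choices`. A minimal sketch of extracting it, assuming `data` holds the parsed JSON from the example above:

{% code overflow="wrap" %}
```python
# Collect the text parts from all assistant message items in `output`.
# `data` is the parsed JSON from the /v1/responses request above.
texts = [
    part["text"]
    for item in data.get("output", [])
    if item.get("type") == "message"
    for part in item.get("content", [])
    if part.get("type") == "output_text"
]
print("".join(texts))  # -> Hello! How can I help you today?
```
{% endcode %}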
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4.1-nano.md # gpt-4.1-nano {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `openai/gpt-4.1-nano-2025-04-14` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview All models of the GPT-4.1 family outperform [GPT‑4o](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) and [GPT‑4o mini](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o-mini) across the board, with major gains in coding and instruction following. They also have larger context windows—supporting up to 1 million tokens of context—and are able to better use that context with improved long-context comprehension. They feature a refreshed knowledge cutoff of June 2024. This model, **GPT-4.1 nano**, is fast, affordable, and powerful. It handles long context (1M tokens) and beats [GPT‑4o mini](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o-mini) on key benchmarks. Perfect for use cases like classification or autocomplete. ## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to.

A minimal sketch of the resulting request is shown right after these steps.

:digit\_four: **(Optional)** **Adjust other parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schemas), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
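Putting steps 1–4 together, a modified request could look like the minimal sketch below. It mirrors the structure of the full code example at the bottom of this page; the empty `Bearer ` value is where your API key goes, and `Hello` stands in for your own prompt.

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key after "Bearer "
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-4.1-nano-2025-04-14",
        "messages": [
            # Replace Hello with your own question or request
            {"role": "user", "content": "Hello"}
        ],
    },
)

print(response.json())
```
{% endcode %}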
## API Schemas
Chat Completions vs. Responses API **Chat Completions**\ The *chat completions* API is the older, chat-oriented interface where you send a list of messages (`role: user`, `role: assistant`, etc.), and the model returns a single response. It was designed specifically for conversational workflows and follows a structured chat message format. It is now considered a legacy interface. **Responses**\ The *Responses* API is the newer, unified interface used across OpenAI’s latest models. Instead of focusing only on chat, it supports multiple input types (text, images, audio, tools, etc.) and multiple output modalities (text, JSON, images, audio, video). It is more flexible, more consistent across models, and intended to replace chat completions entirely.
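In practice, the difference shows up mostly in how the request body is shaped. A minimal sketch of the same prompt expressed against each endpoint (the payloads mirror the code examples earlier on this page; the variable names are illustrative only):

{% code overflow="wrap" %}
```python
# Chat Completions: the conversation is a list of role-tagged messages,
# sent to https://api.aimlapi.com/v1/chat/completions
chat_completions_payload = {
    "model": "openai/gpt-4.1-nano-2025-04-14",
    "messages": [{"role": "user", "content": "Hello"}],
}

# Responses: a single unified `input` field (a plain string here;
# other input item types are also supported),
# sent to https://api.aimlapi.com/v1/responses
responses_payload = {
    "model": "openai/gpt-4.1-nano-2025-04-14",
    "input": "Hello",
}
```
{% endcode %}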
### Chat Completions Endpoint ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-4.1-nano-2025-04-14"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. 
So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."}},"required":["model","messages"],"title":"openai/gpt-4.1-nano-2025-04-14"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ### Responses Endpoint This endpoint is currently used *only* with OpenAI models. Some models support both the `/chat/completions` and `/responses` endpoints, while others support only one of them. ## POST /v1/responses > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/responses":{"post":{"operationId":"_v1_responses","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-4.1-nano-2025-04-14"]},"input":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the user role."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. 
Instructions given with the developer or system role take precedence over instructions given with the user role."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"],"description":"An output message from the model."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"],"description":"A tool call to run a function."},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"],"description":"The output of a function tool call."},{"type":"object","properties":{"code":{"type":"string","description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","interpreting"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["code","id","outputs","status","type","container_id"],"description":"A tool call to run code."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The JSON schema describing the tool's input."},"name":{"type":"string","description":"The name of the tool."},"annotations":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Additional annotations about the tool."},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["input_schema","name"]},"description":"The tools available on the server."},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. 
Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"],"description":"A list of tools available on an MCP server."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"],"description":"A request for human approval of a tool invocation."},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"],"description":"A response to an MCP approval request."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"],"description":"An invocation of a tool on an MCP server."},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}],"description":"Text, image, or file inputs to the model, used to generate a response."},"background":{"type":"boolean","default":false,"description":"Whether to run the model response in the background."},"instructions":{"type":"string","nullable":true,"description":"A system (or developer) message inserted into the model's context.\n\nWhen using along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses."},"include":{"type":"array","nullable":true,"items":{"type":"string","enum":["message.input_image.image_url","computer_call_output.output.image_url","reasoning.encrypted_content","code_interpreter_call.outputs"]},"description":"Specify additional output data to include in the model response. 
Currently supported values are:\n- code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.\n- computer_call_output.output.image_url: Include image urls from the computer call output.\n- file_search_call.results: Include the search results of the file search tool call.\n- message.output_text.logprobs: Include logprobs with assistant messages.\n- reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).\n"},"max_output_tokens":{"type":"integer","description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]}]},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"store":{"type":"boolean","nullable":true,"default":false,"description":"Whether to store the generated model response for later retrieval via API."},"stream":{"type":"boolean","nullable":true,"default":false,"description":"If set to true, the model response data will be streamed to the client as it is generated using server-sent events. "},"text":{"type":"object","properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. 
Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["format"],"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"truncation":{"type":"string","enum":["auto","disabled"],"default":"disabled","description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"tools":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","description":"A description of the function. Used by the model to determine whether or not to call the function."}},"required":["name","parameters","strict","type"],"description":"Defines a function in your own code the model can choose to call."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."}]},"description":"An array of tools the model may call while generating a response. 
You can specify which tool to use by setting the tool_choice parameter."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"]}],"description":"How the model should select which tool (or tools) to use when generating a response."}},"required":["model","input"],"title":"openai/gpt-4.1-nano-2025-04-14"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. 
Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"]}},"text/event-stream":{"schema":{"oneOf":[{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.done"],"description":"The type of the 
event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.done"],"description":"The type of the event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The partial code snippet being streamed by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The final code snippet output by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.done"],"description":"The type of the event."}},"required":["code","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter is interpreting code."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.interpreting"],"description":"The type of the 
event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. 
Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"Properties of the completed response."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.completed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."},"param":{"type":"string","description":"The error parameter."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["error"],"description":"The type of the event."}},"required":["code","message","param","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is searching."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The function-call arguments delta that is added."},"item_id":{"type":"string","description":"The ID of the output item that the function-call arguments delta is added to."},"output_index":{"type":"number","description":"The index of the output item that the function-call arguments delta is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"The function-call arguments."},"item_id":{"type":"string","description":"The ID of the item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this 
Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. 
One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.in_progress"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was 
created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.failed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The 
error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was incomplete."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.incomplete"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was added."},"output_index":{"type":"number","description":"The index of the output item that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.added"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was marked done."},"output_index":{"type":"number","description":"The index of the output item that was marked done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.done"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added to the summary."},"item_id":{"type":"string","description":"The ID of the item this summary text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","summary_index","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary text is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"text":{"type":"string","description":"The full text of the completed reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.done"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","summary_index","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part this delta is associated with."},"delta":{"type":"string","description":"The text delta that was added to the reasoning content."},"item_id":{"type":"string","description":"The ID of the item this reasoning text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.reasoning_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part."},"item_id":{"type":"string","description":"The ID of the item this reasoning text is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The full text of the completed reasoning content."},"type":{"type":"string","enum":["response.reasoning_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","sequence_number","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is added to."},"delta":{"type":"string","description":"The refusal text that is added."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is added to."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is finalized."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is finalized."},"refusal":{"type":"string","description":"The refusal text that is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","refusal","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web 
search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.generating"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"partial_image_b64":{"type":"string","description":"Base64-encoded partial image data, suitable for rendering as an image."},"partial_image_index":{"type":"number","description":"0-based index for the partial image (backend is 1-based, but this is 0-based for the user)."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["response.image_generation_call.partial_image"],"description":"The type of the event."}},"required":["item_id","output_index","partial_image_b64","partial_image_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"A JSON string containing the partial update to the arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string containing the finalized arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that completed."},"output_index":{"type":"number","description":"The index of the output item that completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that produced this output."},"output_index":{"type":"number","description":"The index of the output item that was processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool 
call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that is being processed."},"output_index":{"type":"number","description":"The index of the output item that is being processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"annotation":{"nullable":true,"description":"The annotation object being added."},"annotation_index":{"type":"number","description":"The index of the annotation within the content part."},"content_index":{"type":"number","description":"The index of the content part within the output item."},"item_id":{"type":"string","description":"The unique identifier of the item to which the annotation is being added."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.annotation.added"],"description":"The type of the event."}},"required":["annotation_index","content_index","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The full response object that is queued."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.queued"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The incremental input data (delta) for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this delta applies 
to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"input":{"type":"string","description":"The complete input data for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this event applies to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.done"],"description":"The type of the event."}},"required":["input","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The completed summary part."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.done"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text content is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the text content is finalized."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text content is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The text content that is finalized."},"type":{"type":"string","enum":["response.output_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","logprobs","output_index","sequence_number","text","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the 
response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The summary part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.added"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text delta was added to."},"delta":{"type":"string","description":"The text delta that was added."},"item_id":{"type":"string","description":"The ID of the output item that the text delta was added to."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text delta was added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","logprobs","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that is done."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the 
event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that is done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was created."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.created"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that was added."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added 
to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.added"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]}]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"openai/gpt-4.1-nano-2025-04-14", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'openai/gpt-4.1-nano-2025-04-14', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': 'chatcmpl-BMsNQOMzKxRBsGg98yUiakieydWT3', 'object': 'chat.completion', 'choices': [{'index': 0, 'finish_reason': 'stop', 'logprobs': null, 'message': {'role': 'assistant', 'content': 'Hello! How can I assist you today?', 'refusal': null, 'annotations': []}}], 'created': 1744791940, 'model': 'gpt-4.1-nano-2025-04-14', 'usage': {'prompt_tokens': 2, 'completion_tokens': 8, 'total_tokens': 10, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'system_fingerprint': 'fp_c1fb89028d'} ``` {% endcode %}
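If you only need the model's reply rather than the full object, it can be read from the first element of `choices`. A minimal sketch, assuming the request above succeeded and `data` holds the parsed JSON:

{% code overflow="wrap" %}
```python
# Pull the assistant's text out of the chat completion response shown above.
reply = data["choices"][0]["message"]["content"]
print(reply)  # "Hello! How can I assist you today?"

# Token usage, handy for tracking costs per request.
usage = data.get("usage", {})
print(usage.get("prompt_tokens"), usage.get("completion_tokens"), usage.get("total_tokens"))
```
{% endcode %}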
## Code Example #2: Using /responses Endpoint {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/responses", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"openai/gpt-4.1-nano-2025-04-14", "input":"Hello" # Insert your question for the model here, instead of Hello } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { try { const response = await fetch('https://api.aimlapi.com/v1/responses', { method: 'POST', headers: { // Insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'openai/gpt-4.1-nano-2025-04-14', input: 'Hello', // Insert your question here, instead of Hello }), }); if (!response.ok) { throw new Error(`HTTP error! Status ${response.status}`); } const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } catch (error) { console.error('Error', error); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "resp_686ba45ce63481a2a4b1fad55d2bea8102a1cc22f1a1bcf1", "object": "response", "created_at": 1751884892, "error": null, "incomplete_details": null, "instructions": null, "max_output_tokens": 512, "model": "openai/gpt-4.1-nano-2025-04-14", "output": [ { "id": "rs_686ba463d18481a29dde85cfd7b055bf02a1cc22f1a1bcf1", "type": "reasoning", "summary": [] }, { "id": "msg_686ba463d4e081a2b2e2aff962ab00f702a1cc22f1a1bcf1", "type": "message", "status": "in_progress", "content": [ { "type": "output_text", "annotations": [], "logprobs": [], "text": "Hello! How can I help you today?" } ], "role": "assistant" } ], "parallel_tool_calls": true, "previous_response_id": null, "reasoning": { "effort": "medium", "summary": null }, "temperature": 1, "text": { "format": { "type": "text" } }, "tool_choice": "auto", "tools": [], "top_p": 1, "truncation": "disabled", "usage": { "input_tokens": 294, "input_tokens_details": { "cached_tokens": 0 }, "output_tokens": 2520, "output_tokens_details": { "reasoning_tokens": 0 }, "total_tokens": 2814 }, "metadata": {}, "output_text": "Hello! How can I help you today?" } ``` {% endcode %}
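Note that `/v1/responses` returns an `output` array that can hold several item types (`reasoning`, `message`, tool calls), so the reply is not always the first element. A minimal sketch for extracting the text, assuming `data` holds the parsed JSON from the request above:

{% code overflow="wrap" %}
```python
# Prefer the aggregated `output_text` field when the response includes it;
# otherwise collect text parts from `message` items in the `output` array.
text = data.get("output_text")
if not text:
    parts = []
    for item in data.get("output", []):
        if item.get("type") == "message":
            for part in item.get("content", []):
                if part.get("type") == "output_text":
                    parts.append(part["text"])
    text = "".join(parts)
print(text)  # "Hello! How can I help you today?"
```
{% endcode %}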
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4.1.md # gpt-4.1 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `openai/gpt-4.1-2025-04-14` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview All models of the GPT-4.1 family outperform [GPT‑4o](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) and [GPT‑4o mini](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o-mini) across the board, with major gains in coding and instruction following. They also have larger context windows—supporting up to 1 million tokens of context—and are able to better use that context with improved long-context comprehension. They feature a refreshed knowledge cutoff of June 2024. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to. :digit\_four: **(Optional) Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schemas), which lists all available parameters along with notes on how to use them. :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
## API Schemas
Chat Completions vs. Responses API **Chat Completions**\ The *chat completions* API is the older, chat-oriented interface where you send a list of messages (`role: user`, `role: assistant`, etc.), and the model returns a single response. It was designed specifically for conversational workflows and follows a structured chat message format. It is now considered a legacy interface. **Responses**\ The *Responses* API is the newer, unified interface used across OpenAI’s latest models. Instead of focusing only on chat, it supports multiple input types (text, images, audio, tools, etc.) and multiple output modalities (text, JSON, images, audio, video). It is more flexible, more consistent across models, and intended to replace chat completions entirely.
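To make the structural difference concrete, here is a hedged sketch of the two request bodies side by side. It assumes `openai/gpt-4.1-2025-04-14` is accepted by both endpoints, mirroring the generic examples earlier in these docs; only the required fields are shown:

{% code overflow="wrap" %}
```python
import requests

API_KEY = ""  # insert your AIML API key
HEADERS = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}

# Chat Completions: the conversation is a list of role-tagged messages.
chat = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers=HEADERS,
    json={
        "model": "openai/gpt-4.1-2025-04-14",
        "messages": [{"role": "user", "content": "Hello"}],
    },
)

# Responses: a single `input` field (a string or a list of input items).
resp = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers=HEADERS,
    json={
        "model": "openai/gpt-4.1-2025-04-14",
        "input": "Hello",
    },
)
```
{% endcode %}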
### Chat Completions Endpoint ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-4.1-2025-04-14"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. 
So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."}},"required":["model","messages"],"title":"openai/gpt-4.1-2025-04-14"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ### Responses Endpoint This endpoint is currently used *only* with OpenAI models. Some models support both the `/chat/completions` and `/responses` endpoints, while others support only one of them. ## POST /v1/responses > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/responses":{"post":{"operationId":"_v1_responses","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-4.1-2025-04-14"]},"input":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the user role."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. 
Instructions given with the developer or system role take precedence over instructions given with the user role."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"],"description":"An output message from the model."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"],"description":"The results of a web search tool call."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"],"description":"A tool call to run a function."},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"],"description":"The output of a function tool call."},{"type":"object","properties":{"code":{"type":"string","description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","interpreting"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["code","id","outputs","status","type","container_id"],"description":"A tool call to run code."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The JSON schema describing the tool's input."},"name":{"type":"string","description":"The name of the tool."},"annotations":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Additional annotations about the tool."},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["input_schema","name"]},"description":"The tools available on the server."},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. 
Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"],"description":"A list of tools available on an MCP server."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"],"description":"A request for human approval of a tool invocation."},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"],"description":"A response to an MCP approval request."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"],"description":"An invocation of a tool on an MCP server."},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}],"description":"Text, image, or file inputs to the model, used to generate a response."},"background":{"type":"boolean","default":false,"description":"Whether to run the model response in the background."},"instructions":{"type":"string","nullable":true,"description":"A system (or developer) message inserted into the model's context.\n\nWhen using along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses."},"include":{"type":"array","nullable":true,"items":{"type":"string","enum":["message.input_image.image_url","computer_call_output.output.image_url","reasoning.encrypted_content","code_interpreter_call.outputs"]},"description":"Specify additional output data to include in the model response. 
Currently supported values are:\n- code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.\n- computer_call_output.output.image_url: Include image urls from the computer call output.\n- file_search_call.results: Include the search results of the file search tool call.\n- message.output_text.logprobs: Include logprobs with assistant messages.\n- reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).\n"},"max_output_tokens":{"type":"integer","description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]}]},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"store":{"type":"boolean","nullable":true,"default":false,"description":"Whether to store the generated model response for later retrieval via API."},"stream":{"type":"boolean","nullable":true,"default":false,"description":"If set to true, the model response data will be streamed to the client as it is generated using server-sent events. "},"text":{"type":"object","properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. 
Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["format"],"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"truncation":{"type":"string","enum":["auto","disabled"],"default":"disabled","description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"tools":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","description":"A description of the function. Used by the model to determine whether or not to call the function."}},"required":["name","parameters","strict","type"],"description":"Defines a function in your own code the model can choose to call."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. 
Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."}]},"description":"An array of tools the model may call while generating a response. 
You can specify which tool to use by setting the tool_choice parameter."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"]}],"description":"How the model should select which tool (or tools) to use when generating a response."}},"required":["model","input"],"title":"openai/gpt-4.1-2025-04-14"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. 
Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g."},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
**Function calling and other tool items.**

* `function_call` – `arguments` (a JSON string of the arguments to pass to the function), `call_id`, `name`, and `type` (always `function_call`), plus optional `id` and `status`.
* `function_call_output` – the function result as a JSON string in `output`, tied back to the same `call_id`; `id` and `status` are populated when the item is returned via the API.
* `reasoning` – `id`, a `summary` array of `summary_text` entries, `type` (always `reasoning`), `status`, and an optional `encrypted_content` field populated when the response is generated with `reasoning.encrypted_content` in the `include` parameter.
* `image_generation_call` – `id`, a nullable `result`, `status` (`in_progress`, `completed`, `failed`, or `generating`), and `type`.
* `code_interpreter_call` – the `code` that was run (nullable), `id`, a nullable `outputs` array of `logs` or `image` objects, `status` (`in_progress`, `completed`, `incomplete`, `interpreting`, or `failed`), `type`, and the `container_id` used to run the code.
* `local_shell_call` – an `action` of type `exec` (`command`, `env`, and optional `timeout_ms`, `user`, and `working_directory`) plus `call_id`, `id`, `status`, and `type`.
* `local_shell_call_output` – the shell output as a JSON string, with `id`, `type`, and an optional `status`.
* `mcp_list_tools` – `id`, the `server_label` of the MCP server, the `tools` it exposes (each with `name` and optional `input_schema`, `annotations`, and `description`), `type`, and an optional `error` message if the server could not list tools.
* `mcp_approval_request` – `arguments` (a JSON string of arguments for the tool), `id`, the `name` of the tool to run, the `server_label` of the MCP server, and `type` (always `mcp_approval_request`).
* `mcp_approval_response` – `approval_request_id` (the ID of the approval request being answered), a boolean `approve`, `type`, and optional `id` and `reason`.
* `mcp_call` – `arguments`, `id`, `name`, `server_label`, `type` (always `mcp_call`), plus nullable `error` and `output` fields.
* `item_reference` – an internal identifier (`id`) for an item to reference.

Together with the message items, these shapes also make up the `instructions` field when it is given as a list rather than a plain string: a system (or developer) message inserted into the model's context (it may also be `null`).

**Top-level response fields (continued).**

* `max_output_tokens` – nullable integer; an upper bound on the number of tokens that can be generated, including visible output tokens and reasoning tokens.
* `metadata` – nullable set of up to 16 key-value pairs attached to the object (keys up to 64 characters, values up to 512 characters), useful for storing additional structured information and querying objects via the API or the dashboard.
* `model` – the model ID used to generate the response.
* `object` – always `response`.
* `output` – a nullable array of the content items generated by the model, described below.
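When an MCP tool is configured with `require_approval`, the model emits an `mcp_approval_request` item, and the server-side tool call proceeds only after a matching `mcp_approval_response` is supplied. A sketch under the same endpoint assumptions as above; the allow-list and tool name are hypothetical:

{% code overflow="wrap" %}
```python
import requests

API_KEY = ""  # insert your AIML API key
BASE_URL = "https://api.aimlapi.com/v1/responses"  # assumed endpoint path


def answer_mcp_approvals(response: dict, allowed_tools=("list_issues",)) -> dict:
    """Answer every mcp_approval_request in a response, approving only allow-listed tools."""
    answers = []
    for item in response.get("output", []):
        if item.get("type") != "mcp_approval_request":
            continue
        approved = item["name"] in allowed_tools  # "list_issues" is a hypothetical tool name
        answers.append({
            "type": "mcp_approval_response",
            "approval_request_id": item["id"],  # the ID of the approval request being answered
            "approve": approved,
            "reason": None if approved else "tool is not on the allow-list",
        })
    if not answers:
        return response  # nothing to approve

    reply = requests.post(
        BASE_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": response["model"],
            "previous_response_id": response["id"],
            "input": answers,
        },
    )
    reply.raise_for_status()
    return reply.json()
```
{% endcode %}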
**`output` items.** Each element of `output` is one of the following:

* An assistant **message** – `role` is always `assistant`, `type` is always `message`, and `content` is an array of parts:
  * `output_text` – the generated `text`, its `annotations` (`url_citation` with `start_index`, `end_index`, `title`, and `url`; `file_citation` with `file_id` and `index`; `container_file_citation` with `container_id`, `start_index`, `end_index`, and `file_id`; or `file_path` with `file_id` and `index`), and optional `logprobs` (each with `token`, `logprob`, raw `bytes`, and `top_logprobs`).
  * `refusal` – the refusal explanation from the model.
* `file_search_call` – `id`, the `queries` that were run, `status` (`in_progress`, `searching`, `incomplete`, `failed`, or `completed`), `type`, and optional `results` (each with `attributes`, `file_id`, `filename`, `score`, and `text`).
* The tool-call items already described above, with identical shapes: `computer_call` (with the same action set), `computer_call_output`, `web_search_call`, `function_call`, `reasoning`, `image_generation_call`, `code_interpreter_call`, `local_shell_call`, `local_shell_call_output`, `mcp_list_tools`, `mcp_approval_request`, and `mcp_call`.

The length and order of items in the `output` array depend on the model's response. Rather than reading the first item and assuming it is an assistant message, consider the `output_text` property where the SDK supports it:

* `output_text` – SDK-only convenience property containing the aggregated text of all `output_text` items in the `output` array, if any are present; supported in the Python and JavaScript SDKs.
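A small helper that follows this advice when calling the REST endpoint directly, where the SDK-only `output_text` property is not available: walk the `output` array and concatenate the `output_text` parts, skipping tool calls and reasoning items.

{% code overflow="wrap" %}
```python
def collect_output_text(response: dict) -> str:
    """Concatenate all output_text parts from a Responses-style payload."""
    chunks = []
    for item in response.get("output") or []:   # output is nullable
        if item.get("type") != "message":
            continue                            # skip tool calls, reasoning items, etc.
        for part in item.get("content", []):
            if part.get("type") == "output_text":
                chunks.append(part["text"])
            elif part.get("type") == "refusal":
                chunks.append(f"[refused: {part['refusal']}]")
    return "".join(chunks)
```
{% endcode %}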
**Remaining response fields.**

* `parallel_tool_calls` – whether the model may run tool calls in parallel.
* `previous_response_id` – nullable; the unique ID of the previous response, used to create multi-turn conversations.
* `prompt` – nullable reference to a prompt template: its `id`, an optional `variables` map (substitution values may be strings or other input types such as images or files), and an optional `version`.
* `reasoning` – nullable configuration for reasoning models: `effort` (`low`, `medium`, or `high`; lower effort gives faster responses and fewer reasoning tokens) and `summary` (`auto`, `concise`, or `detailed`), useful for debugging and understanding the model's reasoning process.
* `service_tier` – nullable; the processing type used for serving the request.
* `status` – `completed`, `failed`, `in_progress`, `cancelled`, `queued`, or `incomplete`.
* `temperature` – nullable number between 0 and 2. Higher values such as 0.8 make the output more random; lower values such as 0.2 make it more focused and deterministic. Alter this or `top_p`, but not both.
* `text` – nullable configuration for the text response; its `format` is one of:
  * `text` – the default plain-text response format;
  * `json_object` – an older method of generating JSON responses; `json_schema` is recommended for models that support it, and the model will not generate JSON unless a system or user message instructs it to;
  * `json_schema` – structured JSON output defined by a `name` (letters, digits, underscores, and dashes, up to 64 characters), a JSON Schema `schema`, an optional `description`, and an optional `strict` flag (when true, the model always follows the exact schema; only a subset of JSON Schema is supported in strict mode).
* `tool_choice` – `none`, `auto`, or `required`; or an object selecting a built-in tool (`web_search_preview`, `web_search_preview_2025_03_11`, `computer_use_preview`, `code_interpreter`, `mcp`, `file_search`, or `image_generation`); or an object forcing a specific function by `name`.
* `tools` – nullable array of tools the model may call while generating a response:
  * `web_search_preview` / `web_search_preview_2025_03_11` – optional `search_context_size` (`low`, `medium`, or `high`; default `medium`) and an approximate `user_location` (`city`, `country`, `region`, `timezone`);
  * `computer_use_preview` – `display_height`, `display_width`, and `environment` (`windows`, `mac`, `linux`, `ubuntu`, or `browser`);
  * `mcp` – `server_label`, `server_url`, optional `allowed_tools`, optional `headers` (for authentication or other purposes), and `require_approval` (`always`, `never`, or per-tool lists);
  * `code_interpreter` – a `container` ID or `{ "type": "auto" }`;
  * `local_shell` – allows the model to execute shell commands in a local environment;
  * `function` – `name`, a JSON Schema `parameters` object, optional `strict` parameter validation, and an optional `description` the model uses to decide whether to call the function;
  * `image_generation` – optional `background` (`transparent`, `opaque`, `auto`), `input_image_mask`, `model` (`gpt-image-1`), `moderation` (`auto`, `low`), `output_compression`, `output_format` (`png`, `webp`, `jpeg`), `partial_images` (0–3), `quality` (`low`, `medium`, `high`, `auto`), and `size` (`1024x1024`, `1024x1536`, `1536x1024`, or `auto`).
* `top_p` – nullable nucleus-sampling alternative to temperature; 0.1 means only the tokens comprising the top 10% probability mass are considered. Alter this or `temperature`, but not both.
* `truncation` – `auto` (drop input items from the middle of the conversation if the context window is exceeded) or `disabled` (default; the request fails with a 400 error instead).
* `usage` – token accounting: `input_tokens`, `input_tokens_details.cached_tokens`, `output_tokens`, `output_tokens_details.reasoning_tokens`, and `total_tokens`.

**Streaming (`text/event-stream`).** When the response is streamed, the body is a sequence of typed events instead of a single JSON object.
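These fields combine in the usual OpenAI-compatible way; for example, a `json_schema` text format plus a low `temperature` yields near-deterministic structured output. A sketch, with the endpoint path and model ID as assumptions:

{% code overflow="wrap" %}
```python
import requests

API_KEY = ""  # insert your AIML API key

response = requests.post(
    "https://api.aimlapi.com/v1/responses",  # assumed to mirror OpenAI's path
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "openai/gpt-4o",  # hypothetical model ID
        "input": "Extract the city and country from: 'The Eiffel Tower is in Paris, France.'",
        "temperature": 0.2,        # focused, near-deterministic sampling
        "text": {
            "format": {
                "type": "json_schema",
                "name": "location",
                "strict": True,    # enforce exact schema adherence
                "schema": {
                    "type": "object",
                    "properties": {
                        "city": {"type": "string"},
                        "country": {"type": "string"},
                    },
                    "required": ["city", "country"],
                    "additionalProperties": False,
                },
            }
        },
    },
)
response.raise_for_status()
print(response.json())
```
{% endcode %}

Note that strict mode supports only a subset of JSON Schema, so keep the schema to plain objects, arrays, and primitive types.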
event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.done"],"description":"The type of the event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The partial code snippet being streamed by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The final code snippet output by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.done"],"description":"The type of the event."}},"required":["code","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter is interpreting code."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.interpreting"],"description":"The type of the 
event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. 
Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string","description":"The name of the tool to run."},"server_label":{"type":"string","description":"The label of the MCP server."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string","description":"The name of the tool to run."},"server_label":{"type":"string","description":"The label of the MCP server."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"Properties of the completed response."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.completed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."},"param":{"type":"string","description":"The error parameter."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["error"],"description":"The type of the event."}},"required":["code","message","param","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is searching."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The function-call arguments delta that is added."},"item_id":{"type":"string","description":"The ID of the output item that the function-call arguments delta is added to."},"output_index":{"type":"number","description":"The index of the output item that the function-call arguments delta is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"The function-call arguments."},"item_id":{"type":"string","description":"The ID of the item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this 
Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. 
One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.in_progress"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was 
created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.failed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The 
error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was incomplete."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.incomplete"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was added."},"output_index":{"type":"number","description":"The index of the output item that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.added"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was marked done."},"output_index":{"type":"number","description":"The index of the output item that was marked done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.done"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added to the summary."},"item_id":{"type":"string","description":"The ID of the item this summary text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","summary_index","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary text is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"text":{"type":"string","description":"The full text of the completed reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.done"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","summary_index","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part this delta is associated with."},"delta":{"type":"string","description":"The text delta that was added to the reasoning content."},"item_id":{"type":"string","description":"The ID of the item this reasoning text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.reasoning_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part."},"item_id":{"type":"string","description":"The ID of the item this reasoning text is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The full text of the completed reasoning content."},"type":{"type":"string","enum":["response.reasoning_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","sequence_number","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is added to."},"delta":{"type":"string","description":"The refusal text that is added."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is added to."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is finalized."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is finalized."},"refusal":{"type":"string","description":"The refusal text that is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","refusal","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web 
search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.generating"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"partial_image_b64":{"type":"string","description":"Base64-encoded partial image data, suitable for rendering as an image."},"partial_image_index":{"type":"number","description":"0-based index for the partial image (backend is 1-based, but this is 0-based for the user)."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["response.image_generation_call.partial_image"],"description":"The type of the event."}},"required":["item_id","output_index","partial_image_b64","partial_image_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"A JSON string containing the partial update to the arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string containing the finalized arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that completed."},"output_index":{"type":"number","description":"The index of the output item that completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that produced this output."},"output_index":{"type":"number","description":"The index of the output item that was processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool 
call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that is being processed."},"output_index":{"type":"number","description":"The index of the output item that is being processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"annotation":{"nullable":true,"description":"The annotation object being added."},"annotation_index":{"type":"number","description":"The index of the annotation within the content part."},"content_index":{"type":"number","description":"The index of the content part within the output item."},"item_id":{"type":"string","description":"The unique identifier of the item to which the annotation is being added."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.annotation.added"],"description":"The type of the event."}},"required":["annotation_index","content_index","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The full response object that is queued."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.queued"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The incremental input data (delta) for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this delta applies 
to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"input":{"type":"string","description":"The complete input data for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this event applies to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.done"],"description":"The type of the event."}},"required":["input","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The completed summary part."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.done"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text content is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the text content is finalized."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text content is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The text content that is finalized."},"type":{"type":"string","enum":["response.output_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","logprobs","output_index","sequence_number","text","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the 
response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The summary part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.added"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text delta was added to."},"delta":{"type":"string","description":"The text delta that was added."},"item_id":{"type":"string","description":"The ID of the output item that the text delta was added to."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text delta was added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","logprobs","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that is done."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the 
event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that is done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was created."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.created"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that was added."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added 
to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.added"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]}]}}}}}}}}}
```

## Code Example

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-4.1-2025-04-14",
        "messages": [
            {
                "role": "user",
                "content": "Hello"  # Insert your prompt here, instead of Hello
            }
        ]
    },
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  const response = await fetch('https://api.aimlapi.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      // Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
      'Authorization': 'Bearer <YOUR_AIMLAPI_KEY>',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'openai/gpt-4.1-2025-04-14',
      messages: [
        {
          role: 'user',
          content: 'Hello' // Insert your prompt here, instead of Hello
        }
      ],
    }),
  });

  const data = await response.json();
  console.log(JSON.stringify(data, null, 2));
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
**Response**

{% code overflow="wrap" %}
```json5
{
  'id': 'chatcmpl-BMsFuMklM9ddR18VEX668LnZMw1Z8',
  'object': 'chat.completion',
  'choices': [
    {
      'index': 0,
      'finish_reason': 'stop',
      'logprobs': None,
      'message': {
        'role': 'assistant',
        'content': 'Hello! How can I help you today?',
        'refusal': None,
        'annotations': []
      }
    }
  ],
  'created': 1744791474,
  'model': 'gpt-4.1-2025-04-14',
  'usage': {
    'prompt_tokens': 34,
    'completion_tokens': 168,
    'total_tokens': 202,
    'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0},
    'completion_tokens_details': {
      'reasoning_tokens': 0,
      'audio_tokens': 0,
      'accepted_prediction_tokens': 0,
      'rejected_prediction_tokens': 0
    }
  },
  'system_fingerprint': 'fp_b38e740b47'
}
```
{% endcode %}
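If you only need the model's reply rather than the full payload, it is located at `choices[0].message.content`, as in the response above. A minimal extraction sketch, reusing the `data` variable from the Python example:

```python
# `data` is the parsed JSON returned by /v1/chat/completions (see the example above).
reply = data["choices"][0]["message"]["content"]
print(reply)  # -> Hello! How can I help you today?
```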
## Code Example #2: Using /responses Endpoint

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-4.1-2025-04-14",
        "input": "Hello"  # Insert your question for the model here, instead of Hello
    },
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  try {
    const response = await fetch('https://api.aimlapi.com/v1/responses', {
      method: 'POST',
      headers: {
        // Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
        'Authorization': 'Bearer <YOUR_AIMLAPI_KEY>',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'openai/gpt-4.1-2025-04-14',
        input: 'Hello', // Insert your question here, instead of Hello
      }),
    });

    if (!response.ok) {
      throw new Error(`HTTP error! Status ${response.status}`);
    }

    const data = await response.json();
    console.log(JSON.stringify(data, null, 2));
  } catch (error) {
    console.error('Error', error);
  }
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
**Response**

{% code overflow="wrap" %}
```json5
{
  "id": "resp_686ba45ce63481a2a4b1fad55d2bea8102a1cc22f1a1bcf1",
  "object": "response",
  "created_at": 1751884892,
  "error": null,
  "incomplete_details": null,
  "instructions": null,
  "max_output_tokens": 512,
  "model": "openai/gpt-4.1-2025-04-14",
  "output": [
    {
      "id": "rs_686ba463d18481a29dde85cfd7b055bf02a1cc22f1a1bcf1",
      "type": "reasoning",
      "summary": []
    },
    {
      "id": "msg_686ba463d4e081a2b2e2aff962ab00f702a1cc22f1a1bcf1",
      "type": "message",
      "status": "in_progress",
      "content": [
        {
          "type": "output_text",
          "annotations": [],
          "logprobs": [],
          "text": "Hello! How can I help you today?"
        }
      ],
      "role": "assistant"
    }
  ],
  "parallel_tool_calls": true,
  "previous_response_id": null,
  "reasoning": {
    "effort": "medium",
    "summary": null
  },
  "temperature": 1,
  "text": {
    "format": {
      "type": "text"
    }
  },
  "tool_choice": "auto",
  "tools": [],
  "top_p": 1,
  "truncation": "disabled",
  "usage": {
    "input_tokens": 294,
    "input_tokens_details": {
      "cached_tokens": 0
    },
    "output_tokens": 2520,
    "output_tokens_details": {
      "reasoning_tokens": 0
    },
    "total_tokens": 2814
  },
  "metadata": {},
  "output_text": "Hello! How can I help you today?"
}
```
{% endcode %}
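With the `/responses` endpoint, the generated text appears both as the top-level `output_text` field (when present, as in the payload above) and inside the `output` array as a `message` item containing `output_text` parts. A minimal extraction sketch, reusing the `data` variable from the Python example:

```python
# Use the aggregated `output_text` field if the payload includes it.
text = data.get("output_text")

if text is None:
    # Otherwise, collect the text parts from `message` items in `output`.
    parts = []
    for item in data.get("output", []):
        if item.get("type") == "message":
            for part in item.get("content", []):
                if part.get("type") == "output_text":
                    parts.append(part["text"])
    text = "".join(parts)

print(text)  # -> Hello! How can I help you today?
```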
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4.md

# gpt-4

This documentation is valid for the following model:

  • gpt-4
## Model Overview

The model represents a significant leap forward in conversational AI technology. It offers enhanced understanding and generation of natural language, handling complex and nuanced dialogues with greater coherence and context sensitivity. This model is designed to mimic human-like conversation more closely than ever before.

## How to Make a Call
**Step-by-Step Instructions**

1. **Setup You Can’t Skip**
   * [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).
   * [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.
2. **Copy the code example**
   At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.
3. **Modify the code example**
   * Replace `<YOUR_AIMLAPI_KEY>` with your actual AI/ML API key from your account.
   * Insert your question or request into the `content` field; this is what the model will respond to.
4. **(Optional) Adjust other optional parameters if needed**
   Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schemas), which lists all available parameters along with notes on how to use them; a small sketch with optional parameters follows these instructions.
5. **Run your modified code**
   Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
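To illustrate step 4, here is a request sketch with two optional parameters added. The parameter names and ranges come from the API schema below; the specific values are arbitrary examples:

```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4",
        "messages": [{"role": "user", "content": "Hello"}],
        # Optional parameters (see the API schema below); these values are arbitrary examples:
        "temperature": 0.2,  # 0 to 2; lower values make output more focused and deterministic
        "max_tokens": 256,   # upper bound on the number of generated tokens
    },
)

print(response.json())
```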
## API Schemas
**Chat Completions vs. Responses API**

**Chat Completions**\
The *chat completions* API is the older, chat-oriented interface where you send a list of messages (`role: user`, `role: assistant`, etc.), and the model returns a single response. It was designed specifically for conversational workflows and follows a structured chat message format. It is now considered a legacy interface.

**Responses**\
The *Responses* API is the newer, unified interface used across OpenAI’s latest models. Instead of focusing only on chat, it supports multiple input types (text, images, audio, tools, etc.) and multiple output modalities (text, JSON, images, audio, video). It is more flexible, more consistent across models, and intended to replace chat completions entirely.
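The practical difference shows up mainly in the request body: *chat completions* takes a `messages` list of role/content pairs, while *Responses* takes an `input` field. A minimal comparison of the two payload shapes, using `gpt-4` and a trivial prompt purely for illustration:

```python
# Chat Completions: the prompt is a list of role/content messages.
chat_completions_payload = {
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "Hello"}],
}
# POST https://api.aimlapi.com/v1/chat/completions

# Responses: the prompt goes into a single `input` field.
responses_payload = {
    "model": "gpt-4",
    "input": "Hello",
}
# POST https://api.aimlapi.com/v1/responses
```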
### Chat Completions Endpoint ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["gpt-4"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. 
Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. 
logprobs must be set to True if this parameter is used."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."}},"required":["model","messages"],"title":"gpt-4"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ### Responses Endpoint This endpoint is currently used *only* with OpenAI models. Some models support both the `/chat/completions` and `/responses` endpoints, while others support only one of them. 
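Below is a minimal sketch of calling this endpoint with the Python `requests` library, following the same Bearer-token authentication pattern as the other examples in this documentation. The model ID (`gpt-4`), the `input` field, and the structure of the `output` array are taken from the schema below; the prompt text and the `<YOUR_AIMLAPI_KEY>` placeholder are illustrative, not part of the spec.

{% code overflow="wrap" %}
```python
import requests


def main():
    response = requests.post(
        "https://api.aimlapi.com/v1/responses",
        headers={
            # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
            "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
            "Content-Type": "application/json",
        },
        json={
            # Per the request schema: "model" and "input" are required.
            "model": "gpt-4",
            # A plain string is treated as a user-role text input.
            "input": "In one sentence, what is the difference between a mesh and a texture?",
        },
    )
    response.raise_for_status()
    data = response.json()

    # Per the response schema, generated items live in the "output" array.
    # Assistant messages are items of type "message" whose "content" list
    # holds "output_text" parts; other item types (tool calls, reasoning,
    # etc.) are skipped here.
    for item in data.get("output", []):
        for part in item.get("content", []):
            if part.get("type") == "output_text":
                print(part["text"])


if __name__ == "__main__":
    main()
```
{% endcode %}

This sketch only prints the `output_text` parts; in a real application you may also want to inspect `usage`, `status`, or any tool-call items returned in the same `output` array, as described in the schema below.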
## POST /v1/responses > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/responses":{"post":{"operationId":"_v1_responses","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["gpt-4"]},"input":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the user role."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. 
Instructions given with the developer or system role take precedence over instructions given with the user role."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"],"description":"An output message from the model."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"],"description":"A tool call to run a function."},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"],"description":"The output of a function tool call."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The JSON schema describing the tool's input."},"name":{"type":"string","description":"The name of the tool."},"annotations":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Additional annotations about the tool."},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["input_schema","name"]},"description":"The tools available on the server."},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"],"description":"A list of tools available on an MCP server."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"],"description":"A request for human approval of a tool invocation."},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. 
Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"],"description":"A response to an MCP approval request."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"],"description":"An invocation of a tool on an MCP server."},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}],"description":"Text, image, or file inputs to the model, used to generate a response."},"background":{"type":"boolean","default":false,"description":"Whether to run the model response in the background."},"instructions":{"type":"string","nullable":true,"description":"A system (or developer) message inserted into the model's context.\n\nWhen using along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses."},"include":{"type":"array","nullable":true,"items":{"type":"string","enum":["message.input_image.image_url","computer_call_output.output.image_url","reasoning.encrypted_content","code_interpreter_call.outputs"]},"description":"Specify additional output data to include in the model response. Currently supported values are:\n- code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.\n- computer_call_output.output.image_url: Include image urls from the computer call output.\n- file_search_call.results: Include the search results of the file search tool call.\n- message.output_text.logprobs: Include logprobs with assistant messages.\n- reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).\n"},"max_output_tokens":{"type":"integer","description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. 
Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]}]},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"store":{"type":"boolean","nullable":true,"default":false,"description":"Whether to store the generated model response for later retrieval via API."},"stream":{"type":"boolean","nullable":true,"default":false,"description":"If set to true, the model response data will be streamed to the client as it is generated using server-sent events. "},"text":{"type":"object","properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["format"],"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"truncation":{"type":"string","enum":["auto","disabled"],"default":"disabled","description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"tools":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","description":"A description of the function. Used by the model to determine whether or not to call the function."}},"required":["name","parameters","strict","type"],"description":"Defines a function in your own code the model can choose to call."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. 
Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"]}],"description":"How the model should select which tool (or tools) to use when generating a response."}},"required":["model","input"],"title":"gpt-4"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. 
Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. 
Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"]}},"text/event-stream":{"schema":{"oneOf":[{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.done"],"description":"The type of the 
event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.done"],"description":"The type of the event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The partial code snippet being streamed by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The final code snippet output by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.done"],"description":"The type of the event."}},"required":["code","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter is interpreting code."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.interpreting"],"description":"The type of the 
event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. 
Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"Properties of the completed response."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.completed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."},"param":{"type":"string","description":"The error parameter."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["error"],"description":"The type of the event."}},"required":["code","message","param","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is searching."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The function-call arguments delta that is added."},"item_id":{"type":"string","description":"The ID of the output item that the function-call arguments delta is added to."},"output_index":{"type":"number","description":"The index of the output item that the function-call arguments delta is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"The function-call arguments."},"item_id":{"type":"string","description":"The ID of the item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this 
Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. 
One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.in_progress"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was 
created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.failed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The 
error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was incomplete."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.incomplete"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was added."},"output_index":{"type":"number","description":"The index of the output item that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.added"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was marked done."},"output_index":{"type":"number","description":"The index of the output item that was marked done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.done"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added to the summary."},"item_id":{"type":"string","description":"The ID of the item this summary text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","summary_index","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary text is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"text":{"type":"string","description":"The full text of the completed reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.done"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","summary_index","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part this delta is associated with."},"delta":{"type":"string","description":"The text delta that was added to the reasoning content."},"item_id":{"type":"string","description":"The ID of the item this reasoning text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.reasoning_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part."},"item_id":{"type":"string","description":"The ID of the item this reasoning text is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The full text of the completed reasoning content."},"type":{"type":"string","enum":["response.reasoning_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","sequence_number","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is added to."},"delta":{"type":"string","description":"The refusal text that is added."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is added to."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is finalized."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is finalized."},"refusal":{"type":"string","description":"The refusal text that is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","refusal","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web 
search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.generating"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"partial_image_b64":{"type":"string","description":"Base64-encoded partial image data, suitable for rendering as an image."},"partial_image_index":{"type":"number","description":"0-based index for the partial image (backend is 1-based, but this is 0-based for the user)."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["response.image_generation_call.partial_image"],"description":"The type of the event."}},"required":["item_id","output_index","partial_image_b64","partial_image_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"A JSON string containing the partial update to the arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string containing the finalized arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that completed."},"output_index":{"type":"number","description":"The index of the output item that completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that produced this output."},"output_index":{"type":"number","description":"The index of the output item that was processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool 
call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that is being processed."},"output_index":{"type":"number","description":"The index of the output item that is being processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"annotation":{"nullable":true,"description":"The annotation object being added."},"annotation_index":{"type":"number","description":"The index of the annotation within the content part."},"content_index":{"type":"number","description":"The index of the content part within the output item."},"item_id":{"type":"string","description":"The unique identifier of the item to which the annotation is being added."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.annotation.added"],"description":"The type of the event."}},"required":["annotation_index","content_index","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The full response object that is queued."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.queued"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The incremental input data (delta) for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this delta applies 
to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"input":{"type":"string","description":"The complete input data for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this event applies to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.done"],"description":"The type of the event."}},"required":["input","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The completed summary part."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.done"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text content is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the text content is finalized."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text content is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The text content that is finalized."},"type":{"type":"string","enum":["response.output_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","logprobs","output_index","sequence_number","text","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the 
response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The summary part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.added"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text delta was added to."},"delta":{"type":"string","description":"The text delta that was added."},"item_id":{"type":"string","description":"The ID of the output item that the text delta was added to."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text delta was added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","logprobs","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that is done."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the 
event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that is done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was created."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.created"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that was added."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added 
to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.added"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]}]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"gpt-4", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'gpt-4', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': 'chatcmpl-BKKWkzVpUFHEDbw7MlOsqBIbm9Vi2', 'object': 'chat.completion', 'choices': [{'index': 0, 'finish_reason': 'stop', 'logprobs': None, 'message': {'role': 'assistant', 'content': 'Hello! How can I assist you today?', 'refusal': None, 'annotations': []}}], 'created': 1744185166, 'model': 'gpt-4-0613', 'usage': {'prompt_tokens': 504, 'completion_tokens': 1260, 'total_tokens': 1764, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'system_fingerprint': None} ``` {% endcode %}
## Code Example #2: Using /responses Endpoint

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4",
        "input": "Hello"  # Insert your question for the model here, instead of Hello
    }
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  try {
    const response = await fetch('https://api.aimlapi.com/v1/responses', {
      method: 'POST',
      headers: {
        // Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
        'Authorization': 'Bearer <YOUR_AIMLAPI_KEY>',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'gpt-4',
        input: 'Hello', // Insert your question here, instead of Hello
      }),
    });

    if (!response.ok) {
      throw new Error(`HTTP error! Status ${response.status}`);
    }

    const data = await response.json();
    console.log(JSON.stringify(data, null, 2));
  } catch (error) {
    console.error('Error', error);
  }
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "resp_686ba45ce63481a2a4b1fad55d2bea8102a1cc22f1a1bcf1", "object": "response", "created_at": 1751884892, "error": null, "incomplete_details": null, "instructions": null, "max_output_tokens": 512, "model": "gpt-4", "output": [ { "id": "rs_686ba463d18481a29dde85cfd7b055bf02a1cc22f1a1bcf1", "type": "reasoning", "summary": [] }, { "id": "msg_686ba463d4e081a2b2e2aff962ab00f702a1cc22f1a1bcf1", "type": "message", "status": "in_progress", "content": [ { "type": "output_text", "annotations": [], "logprobs": [], "text": "Hello! How can I help you today?" } ], "role": "assistant" } ], "parallel_tool_calls": true, "previous_response_id": null, "reasoning": { "effort": "medium", "summary": null }, "temperature": 1, "text": { "format": { "type": "text" } }, "tool_choice": "auto", "tools": [], "top_p": 1, "truncation": "disabled", "usage": { "input_tokens": 294, "input_tokens_details": { "cached_tokens": 0 }, "output_tokens": 2520, "output_tokens_details": { "reasoning_tokens": 0 }, "total_tokens": 2814 }, "metadata": {}, "output_text": "Hello! How can I help you today?" } ``` {% endcode %}
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o-audio-preview.md # gpt-4o-audio-preview {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `gpt-4o-audio-preview` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A text model with a support for audio prompts and the ability to generate spoken audio responses. This expansion enhances the potential for AI applications in text and voice-based interactions and audio analysis. You can choose from a wide range of audio formats for output and specify the voice the model will use for audio responses. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["gpt-4o-audio-preview"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_audio"],"description":"The type of the content part."},"input_audio":{"type":"object","properties":{"data":{"type":"string","description":"Base64 encoded audio data."},"format":{"type":"string","enum":["wav","mp3"],"description":"The format of the encoded audio data. Currently supports \"wav\" and \"mp3\"."}},"required":["data","format"]}},"required":["type","input_audio"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. 
Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for a previous audio response from the model."}},"required":["id"],"description":"Data about a previous audio response from the model."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. 
required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. 
So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"audio":{"type":"object","nullable":true,"properties":{"format":{"type":"string","enum":["wav","mp3","flac","opus","pcm16"],"description":"Specifies the output audio format. Must be one of wav, mp3, flac, opus, or pcm16."},"voice":{"type":"string","enum":["alloy","ash","ballad","coral","echo","fable","nova","onyx","sage","shimmer"],"description":"The voice the model uses to respond. Supported voices are alloy, ash, ballad, coral, echo, fable, nova, onyx, sage, and shimmer."}},"required":["format","voice"],"description":"Parameters for audio output. Required when audio output is requested with modalities: [\"audio\"]."},"modalities":{"type":"array","nullable":true,"items":{"type":"string","enum":["text","audio"]},"description":"Output types that you would like the model to generate. Most models are capable of generating text, which is the default:\n \n [\"text\"]\n \n Model can also be used to generate audio. To request that this model generate both text and audio responses, you can use:\n \n [\"text\", \"audio\"]"}},"required":["model","messages"],"title":"gpt-4o-audio-preview"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}}
```

## Code Example

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
from openai import OpenAI
import base64
import os

client = OpenAI(
    base_url="https://api.aimlapi.com",
    # Insert your AI/ML API key instead of <YOUR_AIMLAPI_KEY>:
    api_key="<YOUR_AIMLAPI_KEY>",
)


def main():
    response = client.chat.completions.create(
        model="gpt-4o-audio-preview",
        modalities=["text", "audio"],
        audio={"voice": "alloy", "format": "wav"},
        messages=[
            {
                "role": "system",
                "content": "Speak English"  # Your instructions for the model
            },
            {
                "role": "user",
                "content": "Hello"  # Your question (insert it instead of Hello)
            }
        ],
        max_tokens=6000,
    )

    wav_bytes = base64.b64decode(response.choices[0].message.audio.data)
    with open("audio.wav", "wb") as f:
        f.write(wav_bytes)

    dist = os.path.abspath("audio.wav")
    print("Audio saved to:", dist)


if __name__ == "__main__":
    main()
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response {% hint style="warning" %} We’ve omitted 99% of the base64-encoded file for brevity — even for such a short model response, it’s still extremely large. {% endhint %} {% code overflow="wrap" %} ```json5 ChatCompletion(id='chatcmpl-BrgY0KMxWgy1EHUxYJC49MuMNmdOP', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content=None, refusal=None, role='assistant', annotations=[], audio=ChatCompletionAudio(id='audio_686f73ecf0648191a602c4f315cad928', data='UklGRv////9XQVZFZm10IBAAAAABAAEAwF0AAIC7AAACABAAZGF0Yf////8YABAAEgAXABEAFwASABQAFQAVABcADAAPAAsAEgAOABEACwANABAACgALAAMADQAHABAACAAKAAcACgAFAAQACAAHAAUABQAFAAIACAAAAAgA/v8BAP7////8//b/AQD1/wMA9P/9//X/+f/3//H/+v/1//3/6v/5/+n/9P/u//X/8v/w//P/7v/z/+v/9f/q//T/6//r/+r/6P/s/+P/7P/l/+b/4f/g/+X/3//m/9//6f/l/+X/6f/e/+r/3//l/9n/3f/g/9r/2//V/9z/1P/g/93/4//f/+T/5//q/+X/4//h/9v/3f/X/97/0//Z/9L/2v/Z/9v/2//f/+X/4P/k/+P/4v/h/+H/3P/i/9//3P/f/9n/3f/d/+P/3f/k/97/5P/g/+n/5f/p/+r/6//n/+z/7f/t//D/6//v/+v/6v/m/+L/4v/n/+r/6P/u/+7/9v/7/wEAAQAAAP7/+P/6//L/7v/o/+H/5f/b/+f/4v/1//L///8EAAIADQAJABkADwARAAoADAABAP7/+//5//n/9f8AAPr/BAD//AwABAAYA//8CAP3/AgABAAUABAD8/wQAAQAFAP7/BAABAAEA/////wIAAAADAAIA/v/+//z////7/wEA/P8AAP///v8EAPz//P/9/wQAAQD8/wAAAQD///z/AgD7//7/+/8AAAAA+/8AAP3//v/9/wUAAwD///7/AwACAAIAAgAAAPv/AQD8/wYAAgD7//r/AgABAAAABQD5/wUAAgADAP//AQAFAPn/AQD7/wYA+//9//n//v/7//r/AAD8/wMA//8BAP//AwD9/wMA/f/+//z/+//9//n//v/+/wQAAgACAP7/AwD//wEAAAD8//v/AgD6/wQA/f8AAPn/AAD9//z/AQD//wEA/P/6//7//P/+//7//P8AAPj//P///wIA+v/9/wAA+/8CAP///f/9//r/BQD+/wgAAAADAP3/AQACAAMABAD8/wEA+/8GAP3//v/6/wIA///9/wEA+v8EAPf/AAD5/wUA9/8AAAAA/P8AAPn/AQD3/wMA/P/8//3//v//////AAD8/////P8CAP//BAD7/wUA/P8CAP3///8AAPn/AwD3/wkA/f8FAPr/AwD9//3/AQD1/wEA+//+//v/AwADAAAA///9/wIA/f8DAPz//P/9///////6//7//f8AAAAAAQD+//v/AQD7/////P8AAP7//v////r//v8BAAQA+v/+//z//P8AAP7/AwD8/wAAAQD4/////v8DAP7///8AAPz//P/7/wIA///8//z//f/8//z/AQD8//v//f/7//v/+f/8//z/+/////z//v8AAAAA/v/6/wAA/f8AAPj/AAD+/wIAAgD5//3//P/+//r//v///wAA///9///// !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!WE’VE OMITTED 90% OF THE BASE64-ENCODED FILE FOR BREVITY — EVEN FOR SUCH A SHORT MODEL RESPONSE, IT’S STILL EXTREMELY LARGE. !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!wUAAwAFAAQABgACAAIAAgACAAYAAwAFAAEAAQD///7/AAACAAQAAAD+////AQAAAP//AQADAAMAAgADAAIAAAACAAUABQADAAUABgAGAAcABgAGAAUABQAFAAYABQAFAAgABwAKAAoABwAJAAUABwAIAAgACQAGAAgABQAJAAcABwAJAAcACgAGAAgABAAEAAMAAgAGAAQABAADAAYABQAEAAYAAwAFAAIAAwAGAAYABQADAAQAAAABAAEAAgACAAEAAAD8/////f/+//r/+f/5//f/+P/2//j/9//7//j//P/7//z/+v/6//z/+P/6//f/+//6//r/+v/4//v/+v/6//r//f/6//n//f/8//3/+//9//3////9//3//f/8//v/+/8AAP3//f/6//r//v/6//z/9//6//j/+f/4//r/+f/3//f/9f/3//L/8f/0//P/9P/1//X/8//1//H/9f/z//b/9v/2//j/9P/2//P/+P/0//f/+P/1//X/9f/2//X/9P/1//L/8v/1//P/9P/1//X/9v/4//X/9v/3//n/+v/6//n/+f/3//r/8f/1//P/8//4//j//f/6//v/+P/+//v/+P////z/AwABAA0AAgAOAAYADgAPAA0ACwAEAAwABAD+//3//v///wAABQAAAA4AFwAGABgAFQAgAAQA8f8BAPj/NQAUAAoAJAAXADsABQD9//v/DwAKABYABQA7AC4A2/8=', expires_at=1752138236, transcript="Hi there! 
How's it going?"), function_call=None, tool_calls=None))], created=1752134636, model='gpt-4o-audio-preview-2025-06-03', object='chat.completion', service_tier=None, system_fingerprint='fp_b5d60d6081', usage=CompletionUsage(completion_tokens=5838, prompt_tokens=74, total_tokens=5912, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=33, reasoning_tokens=0, rejected_prediction_tokens=0, text_tokens=14), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0, text_tokens=14, image_tokens=0))) ``` {% endcode %}
{% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o-mini-audio-preview.md # gpt-4o-mini-audio-preview {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `gpt-4o-mini-audio-preview` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A preview release of the smaller GPT-4o Audio mini model. Handles both audio and text as input and output via the REST API. You can choose from a wide range of audio formats for output and specify the voice the model will use for audio responses. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["gpt-4o-mini-audio-preview"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_audio"],"description":"The type of the content part."},"input_audio":{"type":"object","properties":{"data":{"type":"string","description":"Base64 encoded audio data."},"format":{"type":"string","enum":["wav","mp3"],"description":"The format of the encoded audio data. Currently supports \"wav\" and \"mp3\"."}},"required":["data","format"]}},"required":["type","input_audio"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. 
Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for a previous audio response from the model."}},"required":["id"],"description":"Data about a previous audio response from the model."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. 
required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. 
So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"audio":{"type":"object","nullable":true,"properties":{"format":{"type":"string","enum":["wav","mp3","flac","opus","pcm16"],"description":"Specifies the output audio format. Must be one of wav, mp3, flac, opus, or pcm16."},"voice":{"type":"string","enum":["alloy","ash","ballad","coral","echo","fable","nova","onyx","sage","shimmer"],"description":"The voice the model uses to respond. Supported voices are alloy, ash, ballad, coral, echo, fable, nova, onyx, sage, and shimmer."}},"required":["format","voice"],"description":"Parameters for audio output. Required when audio output is requested with modalities: [\"audio\"]."},"modalities":{"type":"array","nullable":true,"items":{"type":"string","enum":["text","audio"]},"description":"Output types that you would like the model to generate. Most models are capable of generating text, which is the default:\n \n [\"text\"]\n \n Model can also be used to generate audio. To request that this model generate both text and audio responses, you can use:\n \n [\"text\", \"audio\"]"}},"required":["model","messages"],"title":"gpt-4o-mini-audio-preview"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python from openai import OpenAI import base64 import os client = OpenAI( base_url = "https://api.aimlapi.com", # Insert your AI/ML API key instead of : api_key = "" ) def main(): response = client.chat.completions.create( model="gpt-4o-mini-audio-preview", modalities=["text", "audio"], audio={"voice": "alloy", "format": "wav"}, messages=[ { "role": "system", "content": "Speak english" # Your instructions for the model }, { "role": "user", "content": "Hello" # Your question (insert it istead of Hello) } ], max_tokens=6000, ) wav_bytes = base64.b64decode(response.choices[0].message.audio.data) with open("audio.wav", "wb") as f: f.write(wav_bytes) dist = os.path.abspath("audio.wav") print("Audio saved to:", dist) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% hint style="warning" %} We’ve omitted 99% of the base64-encoded file for brevity — even for such a short model response, it’s still extremely large. {% endhint %} {% code overflow="wrap" %} ```json5 ChatCompletion(id='chatcmpl-BrghGGR73s5Wt5thg4mhAxquxzmBi', choices=[Choice(finish_reason='stop', index=0, logprobs=None, message=ChatCompletionMessage(content=None, refusal=None, role='assistant', annotations=[], audio=ChatCompletionAudio(id='audio_686f762b97b08191bb5ea391c6b41e1c', data='UklGRv////9XQVZFZm10IBAAAAABAAEAwF0AAIC7AAACABAAZGF0Yf////8MAAEABAAIAAIACQADAAcACAAKAAwAAAAGAAEACQADAAkAAAAFAAcAAgAEAPr/BQD8/wgA/f8CAPz/AQD+//r/AgAAAAEA/f8BAP3/AwD//wMA/P/6//z/+//6//X//f/2//7/9f/6//b/+f/4//L/+v/3//3/7//8/+7/+f/x//n/8f/z//P/8P/z/+v/+v/q//r/7f/x/+//8P/2/+z/9//s//H/6P/o/+v/5f/t/+X/7//q/+v/7//m//D/6f/t/+T/5//u/+b/6f/j/+n/4//s/+3/7v/s/+3/8f/y/+7/7P/r/+r/6v/p/+3/6P/q/+j/7v/t/+//7v/y//P/8f/x//D/7f/v/+3/6v/v/+3/7f/w/+3/8P/w//X/7//0/+//8//u//P/7P/v/+v/7//q//H/8f/0//j/9//7//b/+P/y//D/7//y//H/7f/u/+3/8f/1//z/+f/+//r/+v/7//n/9v/y/+7/8f/q//H/7P/3//b//f8DAPz/BAD+/woAAQACAP7/AAD6//j/+v/8/////OKAfkNkRRbFyoUoBGnCgAJHQkeDGkUjRtII+glVSdfJmcj+yAkHS0cZxocGtYZzRfuFhwWRhZdFv8VVhTgEAEMVgahAHT8Afqg+uX8AADCAsUC0gB2/DD3OfJt7znvwPFh9uT7R/+YAGf/Cvz1+F/2hPUX93L6Tv9VA5MGbweQBhsFQQI7AW//BQCEALIBIQPdAigDwQD1/FIAeQIfCH0MMBDzFTAaOB9kIKchGyAsHkwavhUcEmkNRwzFCU8JgghqBwYGIAUuBlAHBweo/470YegV3+DZl9rx3KTek+Kx4+Lo2vL0/f0JfRHLFEkUEBGnDFwHUAHw+0D2Yu8L6irmcOSP5FXo8+0l9P/6+P2r/7MBPwPfBPgFOAV1Ax0CRQAwAUwFwgkHD6ESERbkGQsd1CCRIYkhKh/2GVQWphDzC6QJIwYqBEQDGQLjAWAAUgBB/yL9Pfzg9ObrGeN82wLaNNtn34XikObP6QzvOPqLAz0Q2BVwFDQTpw34CWIFOf/f+BHys+p15Z3i5OL05TfrVvCj9XT8NAA7BAsI6gkpDR0OOQ2oCzUILQcBCNcJzwymEFITEhWZF8EZ/BztHtkehhuTFjcSVw0FCgUGKQOW/+T69/ju9hX21/UI9MbwYu8Z7V/n0eSa4angy+Na5NnnR+0V8mP7cAM7C4MTYRU4E8sO5QsDCGsDZv439MXsSuiy4xrjJ+Zt6W/uIvPL9jj+GgUTCwQQyhKvFKcVBhRQEI0Odw1+DDYN7g20DlARjRKpE1AXUhqnG/ga3RdNFAAR5gyvCCIDr/4n+ZjzCvDZ7Zbu3Oyv6/bpseYl5ivl1eJs41zlvOdp7BLwsPeOAIcIvg6ZEBkScw/uDKcJXwSHAFn7hfJo6o7mIuST5zLpfeuV8U708vt1AcoH4g8YFA0YgBbPFe4UcxKxECQOTA7cDNIMywxxDFkQmhP5FcsXERgeFxQW9RKFDmQLkgZ1AKP6UvRB78nsDeoW6NLmneWD5Abi+eGS4VDjL+Vi5/jrcfDA+BgAxAd5DqUTzxMYEqkPIQk/CZgCBfyh+MHtGOkG6Gvm6+ms7+fxl/UW+lr/RQfTDz8VShe5GA4XuRXDE54RkhFNEKkOLgxAC50LZQ2hEEsSXRUVFs0UtBLTDy8OOAtQB+8APvoy9OLudesq6IbnquUq5P/iUOHM4aviM+TO5VzqXe0E8+35uv7pCF4OgRJbFfoRHg41CaYDVf/B+oP2EvCf6CDn/uUR6kzx+PSH+z3/IANxCl4QthfAHOsblxrJFnAUYRPFEHMQ2A2lC+cKsAoWDYQP8RFTEzAU8xTOEyISqg6qCqMG3AAS+4L03u6F6fnk4+I+4c7hxOAt4DXhteA75D3nHuoq8Pz0j/rrAGUHSg0uE9wUrBTqEcMK8AXY/mb6nfWo8SfvbOkL6Vfp3e2S9C39UAOsBS8Lgw0kFL0YqRvYHRwZIxa0Ef4NUw7yDYwNLwzPCvsLPQ3nD+0RDhOjEysS9hB3DF4JawVe/0L7QvSb7uLpbuSe4NfeFd7U3tLgqeAD42rl0+fN7Hjx2/e4/V8DAwi5C74P0BGtEb0OQwrlAqj72/Sa73TuJu3c67zr7+tb7yP1+ftaBEoKOg9PEQoSGBW5F10bFRqZF/ATag74DfAMRw60EFIRbxGqEIgRjxIaFC0U2xLtEJcMoQjpAoX9nvij8uvuE+pb5hfjBN9J3vXdEeC/4szjPuY+6H3qlu+x9Jb7YwLCB/ALkg5dEbUR0RBFDekHNQAZ+RDzXe7L7X7tSO5u7qLwa/RF+VUBjAhcEJET0RX1FREWmBp9GXgaHReREacOkwkQC5sLWA9EEZMO4RA1Dx0SIhTrEu0Thg+hC3wGey/4UBngeFDM4OPxSoEwYT+RLJEpwSQRJeFIoPfBAZDS4Igw3iDIgQSRP1Ef0RZBPEFgAadh+OINIfASABEADQAPAA4ADQAQABEACwAPAAwADgAOAA8ADgALAAwADAAOAA8ADwANAA4ADgAOAA4ADQAOAA0ADAAMAAwADQAQAA8ADQAPAA4ADwAQABAAEAATABMAFAAUABUAFQAWABkAFwAZABwAHwAgACIAJAAlAC!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!WE’VE OMITTED 90% OF THE BASE64-ENCODED FILE FOR BREVITY — EVEN FOR SUCH A SHORT MODEL RESPONSE, IT’S STILL EXTREMELY LARGE. 
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!cAKAAoACsALAAuADIAMwA4ADsAOAA5ADgAOgA7ADoAOwA7AD8APQA+ADwAPQA+AD8AQQA+AD8APAA9ADsAOwA8ADwAOwA7ADoAOwA4ADoANQA1ADEAMQAyAC4ALAAnACUAIAAfABwAGgAaABUAFQASABAACgAIAAQA//8AAPv/+v/4//b/8v/0//L/9P/z//P/8//t/+7/6v/p/+f/5//o/+X/5P/k/+X/5f/l/+X/5P/h/97/3//g/93/2v/Z/9b/2P/Z/9j/1f/T/87/zv/O/87/zP/J/8j/zP/I/8f/w//C/8P/x//F/8b/xf/D/8P/w//F/8L/xf/J/8f/xf/H/8j/yv/K/8n/yv/L/8v/z//O/9D/zv/Q/9D/0v/Q/9P/1P/R/9P/1P/T/9X/1P/X/9b/2P/b/9n/2//c/97/3//h/97/3v/g/+P/5v/m/+T/5v/m/+n/5P/n/+X/5//u//D/9P/2//X/8//5//j/9///////AQAEAAsAAwAMAAQACgAPAA4ADgAJABEACQAEAAgACwALAA8AFgAWACUAKQAgACsAJQAvACAADwAbABoARgApACwANQArAEMAEQASAAoAEQAkADAAFABCAEEACQA=', expires_at=1752138811, transcript="Hi there! How's it going?"), function_call=None, tool_calls=None))], created=1752135210, model='gpt-4o-mini-audio-preview-2024-12-17', object='chat.completion', service_tier=None, system_fingerprint='fp_1dfa95e5cb', usage=CompletionUsage(completion_tokens=1278, prompt_tokens=4, total_tokens=1282, completion_tokens_details=CompletionTokensDetails(accepted_prediction_tokens=0, audio_tokens=30, reasoning_tokens=0, rejected_prediction_tokens=0, text_tokens=14), prompt_tokens_details=PromptTokensDetails(audio_tokens=0, cached_tokens=0, text_tokens=14, image_tokens=0))) Audio saved to: c:\Users\user\Documents\Python Scripts\LLMs\audio.wav ``` {% endcode %}
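In addition to audio output, the request schema above accepts audio as input through an `input_audio` content part (base64-encoded WAV or MP3 data). Here is a minimal sketch of such a call; the local file name `speech.wav` and the prompt text are placeholders of our choosing:

{% code overflow="wrap" %}
```python
from openai import OpenAI
import base64

client = OpenAI(
    base_url="https://api.aimlapi.com",
    # Insert your AI/ML API key instead of :
    api_key="",
)


def main():
    # Read a local WAV file and encode it as base64 for the input_audio content part
    with open("speech.wav", "rb") as f:
        encoded_audio = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o-mini-audio-preview",
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "What is said in this recording?"},
                    {
                        "type": "input_audio",
                        "input_audio": {"data": encoded_audio, "format": "wav"},
                    },
                ],
            }
        ],
        max_tokens=6000,
    )
    # With text-only output requested, the answer arrives in message.content
    print(response.choices[0].message.content)


if __name__ == "__main__":
    main()
```
{% endcode %}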
{% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o-mini-search-preview.md # gpt-4o-mini-search-preview {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `gpt-4o-mini-search-preview` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A specialized model trained to understand and execute web search queries with the [Chat completions](https://docs.aimlapi.com/capabilities/completion-or-chat-models) API. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to. :digit\_four: **(Optional) Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}

## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["gpt-4o-mini-search-preview"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. 
Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]}},"required":["model","messages"],"title":"gpt-4o-mini-search-preview"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. 
Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"gpt-4o-mini-search-preview", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'gpt-4o-mini-search-preview', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
**Response**:

{% code overflow="wrap" %}
```json5
{'id': 'chatcmpl-d5329df8-efab-48d8-b607-9e61dd14553b', 'object': 'chat.completion', 'choices': [{'index': 0, 'finish_reason': 'stop', 'message': {'role': 'assistant', 'content': 'Hello! How can I assist you today? ', 'refusal': None, 'annotations': []}}], 'created': 1744217025, 'model': 'gpt-4o-mini-search-preview-2025-03-11', 'usage': {'prompt_tokens': 0, 'completion_tokens': 13, 'total_tokens': 13, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 0, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'system_fingerprint': ''}
```
{% endcode %}
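If you only need the assistant's reply rather than the full response object, it can be read from the first element of `choices`. A minimal sketch, assuming the `data` variable from the Python example above:

{% code overflow="wrap" %}
```python
# Assumes `data` is the parsed JSON response from the Python example above.
reply = data["choices"][0]["message"]["content"]
print(reply)
```
{% endcode %}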
--- # Source: https://docs.aimlapi.com/api-references/speech-models/speech-to-text/openai/gpt-4o-mini-transcribe.md # gpt-4o-mini-transcribe {% hint style="info" %} This documentation is valid for the following list of our models: * `openai/gpt-4o-mini-transcribe` {% endhint %} ## Model Overview A speech-to-text model based on [GPT-4o mini](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o-mini) for audio transcription. It provides improved word error rates and more accurate language recognition compared to the original Whisper models. Recommended for use cases that require higher transcription accuracy. {% hint style="success" %} OpenAI STT models are priced based on tokens, similar to chat models. In practice, this means the cost primarily depends on the duration of the input audio. {% endhint %} ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schemas #### Creating and sending a speech-to-text conversion task to the server ## POST /v1/stt/create > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.SpeechToTextCreateResponseDTO":{"type":"object","properties":{"generation_id":{"type":"string","format":"uuid"}},"required":["generation_id"]}}},"paths":{"/v1/stt/create":{"post":{"operationId":"VoiceModelsController_createSpeechToText_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["openai/gpt-4o-mini-transcribe"]},"language":{"type":"string","description":"The BCP-47 language tag that hints at the primary spoken language. Depending on the Model and API endpoint you choose only certain languages are available"},"prompt":{"type":"string","description":"An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language."},"temperature":{"type":"number","minimum":0,"maximum":1,"default":0,"description":"The sampling temperature, between 0 and 1. 
Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic."},"url":{"type":"string","format":"uri","description":"URL of the input audio file."}},"required":["model","url"]}}}},"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.SpeechToTextCreateResponseDTO"}}}}},"tags":["Voice Models"]}}}} ``` #### Requesting the result of the task from the server using the generation\_id ## GET /v1/stt/{generation\_id} > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.SpeechToTextGetResponseDTO":{"type":"object","properties":{"generation_id":{"type":"string"},"status":{"type":"string","enum":["queued","completed","error","generating"]},"result":{"anyOf":[{"type":"object","properties":{"metadata":{"type":"object","properties":{"transaction_key":{"type":"string","description":"A unique transaction key; currently always “deprecated”."},"request_id":{"type":"string","description":"A UUID identifying this specific transcription request."},"sha256":{"type":"string","description":"The SHA-256 hash of the submitted audio file (for pre-recorded requests)."},"created":{"type":"string","format":"date-time","description":"ISO-8601 timestamp."},"duration":{"type":"number","description":"Length of the audio in seconds."},"channels":{"type":"number","description":"The top-level results object containing per-channel transcription alternatives."},"models":{"type":"array","items":{"type":"string"},"description":"List of model UUIDs used for this transcription"},"model_info":{"type":"object","additionalProperties":{"type":"object","properties":{"name":{"type":"string","description":"The human-readable name of the model — identifies which model was used."},"version":{"type":"string","description":"The specific version of the model."},"arch":{"type":"string","description":"The architecture of the model — describes the model family / generation."}},"required":["name","version","arch"]},"description":"Mapping from each model UUID (in 'models') to detailed info: its name, version, and architecture."}},"required":["transaction_key","request_id","sha256","created","duration","channels","models","model_info"],"description":"Metadata about the transcription response, including timing, models, and IDs."},"results":{"type":"object","nullable":true,"properties":{"channels":{"type":"object","properties":{"alternatives":{"type":"array","items":{"type":"object","properties":{"transcript":{"type":"string","description":"The full transcript text for this alternative."},"confidence":{"type":"number","description":"Overall confidence score (0-1) that assigns to this transcript alternative."},"words":{"type":"array","items":{"type":"object","properties":{"word":{"type":"string","description":"The raw recognized word, without punctuation or capitalization."},"start":{"type":"number","description":"Start timestamp of the word (in seconds, from beginning of audio)."},"end":{"type":"number","description":"End timestamp of the word (in seconds)."},"confidence":{"type":"number","description":"Confidence score (0-1) for this individual word."},"punctuated_word":{"type":"string","description":"The same word but with punctuation/capitalization applied (if 
smart_format is enabled)."}},"required":["word","start","end","confidence","punctuated_word"]},"description":"List of word-level timing, confidence, and punctuation details."},"paragraphs":{"type":"array","items":{"type":"object","properties":{"transcript":{"type":"string","description":"The transcript split into paragraphs (with line breaks), when paragraphing is enabled."},"paragraphs":{"type":"object","properties":{"sentences":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"Text of a single sentence in the paragraph."},"start":{"type":"number","description":"Start time of the sentence (in seconds)."},"end":{"type":"number","description":"End time of the sentence (in seconds)."}},"required":["text","start","end"]},"description":"List of sentences in this paragraph, with start/end times."},"num_words":{"type":"number","description":"Number of words in this paragraph."},"start":{"type":"number","description":"Start time of the paragraph (in seconds)."},"end":{"type":"number","description":"End time of the paragraph (in seconds)."}},"required":["sentences","num_words","start","end"],"description":"Structure describing each paragraph: its timespan, word count, and sentence breakdown."}},"required":["transcript","paragraphs"]},"description":"An array of paragraph objects, present when the paragraphs feature is enabled."}},"required":["transcript","confidence","words","paragraphs"]},"description":"List of possible transcription hypotheses (“alternatives”) for each channel."}},"required":["alternatives"],"description":"The top-level results object containing per-channel transcription alternatives."}},"required":["channels"]}},"required":["metadata"]},{"type":"object","properties":{"id":{"type":"string","format":"uuid"},"language_model":{"type":"string"},"acoustic_model":{"type":"string"},"language_code":{"type":"string"},"status":{"type":"string","enum":["queued","processing","completed","error"]},"language_detection":{"type":"boolean"},"language_confidence_threshold":{"type":"number"},"language_confidence":{"type":"number"},"speech_model":{"type":"string","enum":["best","slam-1","universal"]},"text":{"type":"string"},"words":{"type":"array","items":{"type":"object","properties":{"confidence":{"type":"number"},"end":{"type":"number"},"speaker":{"type":"string"},"start":{"type":"number"},"text":{"type":"string"}},"required":["confidence","end","start","text"]}},"utterances":{"type":"array","items":{"type":"object","properties":{"confidence":{"type":"number"},"end":{"type":"number"},"speaker":{"type":"string"},"start":{"type":"number"},"text":{"type":"string"},"words":{"type":"array","items":{"type":"object","properties":{"confidence":{"type":"number"},"end":{"type":"number"},"speaker":{"type":"string"},"start":{"type":"number"},"text":{"type":"string"}},"required":["confidence","end","start","text"]}}},"required":["confidence","end","speaker","start","text","words"]}},"confidence":{"type":"number"},"audio_duration":{"type":"number"},"punctuate":{"type":"boolean"},"format_text":{"type":"boolean"},"disfluencies":{"type":"boolean"},"multichannel":{"type":"boolean"},"webhook_url":{"type":"string"},"webhook_status_code":{"type":"number"},"webhook_auth_header_name":{"type":"string"},"speed_boost":{"type":"boolean"},"auto_highlights_result":{"type":"object","properties":{"status":{"type":"string"},"results":{"type":"array","items":{"type":"object","properties":{"count":{"type":"number"},"rank":{"type":"number"},"text":{"type":"string"},"timestamps":{"type":"
array","items":{"type":"object","properties":{"start":{"type":"number"},"end":{"type":"number"}},"required":["start","end"]}}},"required":["count","rank","text","timestamps"]}}},"required":["status","results"]},"auto_highlights":{"type":"boolean"},"audio_start_from":{"type":"number"},"audio_end_at":{"type":"number"},"word_boost":{"type":"array","items":{"type":"string"}},"boost_param":{"type":"string"},"filter_profanity":{"type":"boolean"},"redact_pii":{"type":"boolean"},"redact_pii_audio":{"type":"boolean"},"redact_pii_audio_quality":{"type":"string","enum":["mp3","wav"]},"redact_pii_policies":{"type":"array","items":{"type":"string"}},"redact_pii_sub":{"type":"string","enum":["entity_name","hash"]},"speaker_labels":{"type":"boolean"},"speakers_expected":{"type":"number"},"content_safety":{"type":"boolean"},"iab_categories":{"type":"boolean"},"content_safety_labels":{"type":"object","properties":{"status":{"type":"string"},"results":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string"},"labels":{"type":"array","items":{"type":"object","properties":{"label":{"type":"string"},"confidence":{"type":"number"},"severity":{"type":"number"}},"required":["label","confidence","severity"]}},"sentences_idx_start":{"type":"number"},"sentences_idx_end":{"type":"number"},"timestamp":{"type":"object","properties":{"start":{"type":"number"},"end":{"type":"number"}},"required":["start","end"]}},"required":["text","labels","sentences_idx_start","sentences_idx_end","timestamp"]}},"summary":{"type":"object","additionalProperties":{"type":"number"}}},"required":["status","results","summary"]},"iab_categories_result":{"type":"object","properties":{"status":{"type":"string"},"results":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string"},"labels":{"type":"array","items":{"type":"object","properties":{"relevance":{"type":"number"},"label":{"type":"string"}},"required":["relevance","label"]}},"timestamp":{"type":"object","properties":{"start":{"type":"number"},"end":{"type":"number"}},"required":["start","end"]}},"required":["text","labels","timestamp"]}},"summary":{"type":"object","additionalProperties":{"type":"number"}}},"required":["status","results","summary"]},"custom_spelling":{"type":"array","items":{"type":"object","properties":{"from":{"type":"string"},"to":{"type":"string"}},"required":["from","to"]}},"chapters":{"type":"array","items":{"type":"object","properties":{"summary":{"type":"string"},"headline":{"type":"string"},"gist":{"type":"string"},"start":{"type":"number"},"end":{"type":"number"}},"required":["summary","headline","gist","start","end"]}},"summarization":{"type":"boolean"},"summary_type":{"type":"string"},"summary_model":{"type":"string"},"summary":{"type":"string"},"auto_chapters":{"type":"boolean"},"sentiment_analysis":{"type":"boolean"},"sentiment_analysis_results":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string"},"start":{"type":"number"},"end":{"type":"number"},"sentiment":{"type":"string","enum":["POSITIVE","NEUTRAL","NEGATIVE"]},"confidence":{"type":"number"},"speaker":{"type":"string"}},"required":["text","start","end","sentiment","confidence"]}},"entity_detection":{"type":"boolean"},"entities":{"type":"array","items":{"type":"object","properties":{"entity_type":{"type":"string"},"text":{"type":"string"},"start":{"type":"number"},"end":{"type":"number"}},"required":["entity_type","text","start","end"]}},"speech_threshold":{"type":"number"},"throttled":{"type":"boolean"},"error":{"type":"string"}},"requ
ired":["id","status"],"additionalProperties":false},{"type":"object","properties":{"text":{"type":"string"},"usage":{"type":"object","properties":{"type":{"type":"string","enum":["tokens"]},"input_tokens":{"type":"number"},"input_token_details":{"type":"object","properties":{"text_tokens":{"type":"number"},"audio_tokens":{"type":"number"}},"required":["text_tokens","audio_tokens"]},"output_tokens":{"type":"number"},"total_tokens":{"type":"number"}},"required":["input_tokens","output_tokens","total_tokens"]}},"required":["text"],"additionalProperties":false},{"nullable":true}]},"error":{"nullable":true}},"required":["generation_id","status"]}}},"paths":{"/v1/stt/{generation_id}":{"get":{"operationId":"VoiceModelsController_getSTT_v1","parameters":[{"name":"generation_id","required":true,"in":"path","schema":{"type":"string"}}],"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.SpeechToTextGetResponseDTO"}}}}},"tags":["Voice Models"]}}}} ``` ## Example Code: Processing a Speech Audio File via URL Let's use the `openai/gpt-4o-mini-transcribe` model to transcribe the following audio fragment: {% embed url="" %} {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time import json base_url = "https://api.aimlapi.com/v1" # Insert your AIML API Key instead of : api_key = "" # Create and send a speech-to-text conversion task to the server def create_stt(): url = f"{base_url}/stt/create" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "openai/gpt-4o-mini-transcribe", "url": "https://audio-samples.github.io/samples/mp3/blizzard_primed/sample-0.mp3" } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Request the result of the task from the server using the generation_id def get_stt(gen_id): url = f"{base_url}/stt/{gen_id}" headers = { "Authorization": f"Bearer {api_key}", } response = requests.get(url, headers=headers) return response.json() # Start the generation, then repeatedly request the result from the server every 10 sec. def main(): stt_response = create_stt() gen_id = stt_response.get("generation_id") if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_stt(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["queued", "generating"]: print(f"Status: {status}. Checking again in 10 seconds.") time.sleep(10) else: # data = .json() print("Processing complete:") print(json.dumps(response_data["result"], indent=2, ensure_ascii=False)) return response_data print("Timeout reached. 
Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript const baseUrl = "https://api.aimlapi.com/v1"; // Insert your AIML API Key instead of : const apiKey = ""; // Create and send a speech-to-text conversion task to the server async function createSTT() { const url = `${baseUrl}/stt/create`; const response = await fetch(url, { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, body: JSON.stringify({ model: "openai/gpt-4o-mini-transcribe", url: "https://audio-samples.github.io/samples/mp3/blizzard_primed/sample-0.mp3", }), }); if (!response.ok) { const text = await response.text(); console.error(`Error: ${response.status} - ${text}`); return null; } const data = await response.json(); console.log(data); return data; } // Request the result of the task from the server using the generation_id async function getSTT(genId) { const url = `${baseUrl}/stt/${genId}`; const response = await fetch(url, { headers: { "Authorization": `Bearer ${apiKey}`, }, }); if (!response.ok) { return null; } return response.json(); } // Start generation and poll every 10s async function main() { const sttResponse = await createSTT(); const genId = sttResponse?.generation_id; if (!genId) { console.error("No generation_id received"); return null; } const startTime = Date.now(); const timeoutMs = 600 * 1000; // 10 minutes while (Date.now() - startTime < timeoutMs) { const responseData = await getSTT(genId); if (!responseData) { console.error("Error: No response from API"); return null; } const status = responseData.status; if (status === "queued" || status === "generating") { console.log(`Status: ${status}. Checking again in 10 seconds.`); await new Promise(resolve => setTimeout(resolve, 10_000)); } else { console.log("Processing complete:"); console.log(JSON.stringify(responseData.result, null, 2)); return responseData; } } console.log("Timeout reached. Stopping."); return null; } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
**Response**:

{% code overflow="wrap" %}
```
{'generation_id': 'dzIgQQyw8KCfoI5clcbHZ', 'status': 'queued'}
Status: queued. Checking again in 10 seconds.
Processing complete:
{
  "text": "He doesn't belong to you, and I don't see how you have anything to do with what is be his power of. He's he personified that from this stage to you.",
  "usage": {
    "type": "tokens",
    "total_tokens": 137,
    "input_tokens": 100,
    "input_token_details": {
      "text_tokens": 0,
      "audio_tokens": 100
    },
    "output_tokens": 37
  }
}
```
{% endcode %}
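For `openai/gpt-4o-mini-transcribe`, the transcript itself is returned in the `text` field of `result` (other STT models may return a differently shaped `result`, as described in the GET schema above). A minimal sketch, assuming `response_data` is the completed object returned by `main()` in the Python example above:

{% code overflow="wrap" %}
```python
# Assumes `response_data` is the completed response returned by main() above.
result = response_data.get("result") or {}
print(result.get("text", ""))  # the transcribed text
```
{% endcode %}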
--- # Source: https://docs.aimlapi.com/api-references/speech-models/text-to-speech/openai/gpt-4o-mini-tts.md # gpt-4o-mini-tts {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `openai/gpt-4o-mini-tts` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} A text-to-speech model based on [GPT-4o mini](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o-mini), supporting up to 2,000 input tokens. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/tts > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.TextToSpeechResponse":{"type":"object","properties":{"metadata":{"type":"object","properties":{"transaction_key":{"type":"string"},"request_id":{"type":"string"},"sha256":{"type":"string"},"created":{"type":"string","format":"date-time"},"duration":{"type":"number"},"channels":{"type":"number"},"models":{"type":"array","items":{"type":"string"}},"model_info":{"type":"object","additionalProperties":{"type":"object","properties":{"name":{"type":"string"},"version":{"type":"string"},"arch":{"type":"string"}},"required":["name","version","arch"]}}},"required":["transaction_key","request_id","sha256","created","duration","channels","models","model_info"]}},"required":["metadata"]}}},"paths":{"/v1/tts":{"post":{"operationId":"VoiceModelsController_textToSpeech_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["openai/gpt-4o-mini-tts"]},"text":{"type":"string","minLength":1,"maxLength":4096,"description":"The text content to be converted to speech."},"voice":{"type":"string","enum":["alloy","ash","ballad","coral","echo","fable","nova","onyx","sage","shimmer","verse"],"default":"alloy","description":"Name of the voice to be used."},"style":{"type":"string","description":"Determines the style exaggeration of the voice. This setting attempts to amplify the style of the original speaker. It does consume additional computational resources and might increase latency if set to anything other than 0."},"response_format":{"type":"string","enum":["mp3","opus","aac","flac","wav","pcm"],"default":"mp3","description":"Format of the output content for non-streaming requests. Controls how the generated audio data is encoded in the response."},"speed":{"type":"number","minimum":0.25,"maximum":4,"default":1,"description":"Adjusts the speed of the voice. 
A value of 1.0 is the default speed, while values less than 1.0 slow down the speech, and values greater than 1.0 speed it up."}},"required":["model","text"]}}}},"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.TextToSpeechResponse"}}}}},"tags":["Voice Models"]}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests # Insert your AI/ML API key instead of : api_key = "" base_url = "https://api.aimlapi.com/v1" headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json", } data = { "model": "openai/gpt-4o-mini-tts", "text": "GPT-4o-mini-tts is a small and fast model. Use it to convert text to natural sounding spoken text.", "voice": "coral", } response = requests.post(f"{base_url}/tts", headers=headers, json=data) response.raise_for_status() result = response.json() print("Audio URL:", result["audio"]["url"]) ``` {% endcode %} {% endtab %} {% tab title="JaveScript" %} {% code overflow="wrap" %} ```javascript import axios from "axios"; // Insert your AI/ML API key instead of : const apiKey = ""; const baseURL = "https://api.aimlapi.com/v1"; const headers = { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json", }; const data = { model: "openai/gpt-4o-mini-tts", text: "GPT-4o-mini-tts is a small and fast model. Use it to convert text to natural sounding spoken text.", voice: "coral", }; const main = async () => { const response = await axios.post(`${baseURL}/tts`, data, { headers }); console.log("Audio URL:", response.data.audio.url); }; main().catch(console.error); ``` {% endcode %} {% endtab %} {% endtabs %}
**Response**:

{% code overflow="wrap" %}
```
Audio URL: https://cdn.aimlapi.com/generations/hedgehog/1760948488200-5a500947-2ec2-41b5-b77b-4b75c7913aad.mp3
```
{% endcode %}
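The endpoint returns a link to the generated audio rather than the raw bytes. To keep the result locally, you can download the file from the returned URL. A minimal sketch, assuming the `result` variable from the Python example above; the local file name `speech.mp3` is arbitrary (the default `response_format` is `mp3`):

{% code overflow="wrap" %}
```python
import requests

# Assumes `result` is the parsed JSON response from the Python TTS example above.
audio_url = result["audio"]["url"]
audio_response = requests.get(audio_url)
audio_response.raise_for_status()

with open("speech.mp3", "wb") as f:  # arbitrary local file name
    f.write(audio_response.content)
```
{% endcode %}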
Listen to the audio sample we generated:

{% embed url="" %}

---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o-mini.md

# gpt-4o-mini

This documentation is valid for the following list of our models:

* `gpt-4o-mini`
* `gpt-4o-mini-2024-07-18`
Try in Playground
## Model Overview

OpenAI's latest cost-efficient model designed to deliver advanced natural language processing and multimodal capabilities. It aims to make AI more accessible and affordable, significantly enhancing the range of applications that can utilize AI technology.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace the placeholder with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field: this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schemas), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
## API Schemas
Chat Completions vs. Responses API

**Chat Completions**\
The *chat completions* API is the older, chat-oriented interface where you send a list of messages (`role: user`, `role: assistant`, etc.), and the model returns a single response. It was designed specifically for conversational workflows and follows a structured chat message format. It is now considered a legacy interface.

**Responses**\
The *Responses* API is the newer, unified interface used across OpenAI’s latest models. Instead of focusing only on chat, it supports multiple input types (text, images, audio, tools, etc.) and multiple output modalities (text, JSON, images, audio, video). It is more flexible, more consistent across models, and intended to replace chat completions entirely.
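To make the message format concrete, here is a minimal Chat Completions sketch for this model (a sketch, not one of the official examples below; replace the placeholder with your AIML API key). The `messages` list carries the conversation history, one entry per turn with its `role` and `content`:

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4o-mini",
        "messages": [
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": "What is the capital of France?"},
            {"role": "assistant", "content": "Paris."},
            {"role": "user", "content": "And of Italy?"},
        ],
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}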
### Chat Completions Endpoint ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["gpt-4o-mini","gpt-4o-mini-2024-07-18"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. 
So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."}},"required":["model","messages"],"title":"gpt-4o-mini"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}}
```

### Responses Endpoint

This endpoint is currently used *only* with OpenAI models. Some models support both the `/chat/completions` and `/responses` endpoints, while others support only one of them.

## POST /v1/responses

> ```json
{"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/responses":{"post":{"operationId":"_v1_responses","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["gpt-4o-mini","gpt-4o-mini-2024-07-18"]},"input":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the user role."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. 
Instructions given with the developer or system role take precedence over instructions given with the user role."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"],"description":"An output message from the model."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"],"description":"The results of a web search tool call."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"],"description":"A tool call to run a function."},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"],"description":"The output of a function tool call."},{"type":"object","properties":{"code":{"type":"string","description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","interpreting"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["code","id","outputs","status","type","container_id"],"description":"A tool call to run code."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The JSON schema describing the tool's input."},"name":{"type":"string","description":"The name of the tool."},"annotations":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Additional annotations about the tool."},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["input_schema","name"]},"description":"The tools available on the server."},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. 
Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"],"description":"A list of tools available on an MCP server."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"],"description":"A request for human approval of a tool invocation."},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"],"description":"A response to an MCP approval request."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"],"description":"An invocation of a tool on an MCP server."},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}],"description":"Text, image, or file inputs to the model, used to generate a response."},"background":{"type":"boolean","default":false,"description":"Whether to run the model response in the background."},"instructions":{"type":"string","nullable":true,"description":"A system (or developer) message inserted into the model's context.\n\nWhen using along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses."},"include":{"type":"array","nullable":true,"items":{"type":"string","enum":["message.input_image.image_url","computer_call_output.output.image_url","reasoning.encrypted_content","code_interpreter_call.outputs"]},"description":"Specify additional output data to include in the model response. 
Currently supported values are:\n- code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.\n- computer_call_output.output.image_url: Include image urls from the computer call output.\n- file_search_call.results: Include the search results of the file search tool call.\n- message.output_text.logprobs: Include logprobs with assistant messages.\n- reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).\n"},"max_output_tokens":{"type":"integer","description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]}]},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"store":{"type":"boolean","nullable":true,"default":false,"description":"Whether to store the generated model response for later retrieval via API."},"stream":{"type":"boolean","nullable":true,"default":false,"description":"If set to true, the model response data will be streamed to the client as it is generated using server-sent events. "},"text":{"type":"object","properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. 
Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["format"],"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"truncation":{"type":"string","enum":["auto","disabled"],"default":"disabled","description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"tools":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","description":"A description of the function. Used by the model to determine whether or not to call the function."}},"required":["name","parameters","strict","type"],"description":"Defines a function in your own code the model can choose to call."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. 
Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."}]},"description":"An array of tools the model may call while generating a response. 
You can specify which tool to use by setting the tool_choice parameter."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"]}],"description":"How the model should select which tool (or tools) to use when generating a response."}},"required":["model","input"],"title":"gpt-4o-mini"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. 
Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string","description":"The name of the tool to run."},"server_label":{"type":"string","description":"The label of the MCP server making the request."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"]}},"text/event-stream":{"schema":{"oneOf":[{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.done"],"description":"The type of the 
event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.done"],"description":"The type of the event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The partial code snippet being streamed by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The final code snippet output by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.done"],"description":"The type of the event."}},"required":["code","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter is interpreting code."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.interpreting"],"description":"The type of the 
event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. 
Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string","description":"The name of the tool to run."},"server_label":{"type":"string","description":"The label of the MCP server making the request."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
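To make the `output` array concrete, here is a minimal sketch of collecting generated text from a completed response. The endpoint path `/v1/responses` and the request body shape (`model`, `input`) are assumptions based on the OpenAI-compatible Responses convention and should be checked against the API reference; the item handling follows the schema above.

{% code overflow="wrap" %}
```python
import requests

API_KEY = "<YOUR_AIMLAPI_KEY>"

# Assumed endpoint path and request fields; verify them in the API reference.
resp = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers={"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"},
    json={"model": "openai/gpt-4o", "input": "Give me three facts about fly agaric mushrooms."},
)
resp.raise_for_status()
data = resp.json()

# Do not assume output[0] is the assistant message: walk the item union instead.
texts = []
for item in data.get("output") or []:
    if item.get("type") == "message" and item.get("role") == "assistant":
        for part in item.get("content", []):
            if part.get("type") == "output_text":
                texts.append(part["text"])
            elif part.get("type") == "refusal":
                texts.append(f"[refusal] {part['refusal']}")

print("\n".join(texts))
```
{% endcode %}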
Alongside `output`, the completed Response reports its generation settings and bookkeeping fields:

* `output_text` – an SDK-only convenience property (Python and JavaScript SDKs) that aggregates the text of all `output_text` items in `output`.
* `parallel_tool_calls` – whether the model may run tool calls in parallel.
* `previous_response_id` – the ID of the previous response; use it to create multi-turn conversations.
* `prompt` – reference to a prompt template: `id` (required), optional `variables`, optional `version`.
* `reasoning` – options for reasoning models: `effort` (`low`, `medium`, `high`) and `summary` (`auto`, `concise`, `detailed`).
* `service_tier` – the processing type used for serving the request.
* `status` – one of `completed`, `failed`, `in_progress`, `cancelled`, `queued`, `incomplete`.
* `temperature` – sampling temperature between 0 and 2; higher values such as 0.8 make output more random, lower values such as 0.2 more focused and deterministic. Alter this or `top_p`, not both.
* `text.format` – the output format: plain `text` (default), the older `json_object` mode (the model will not generate JSON unless a system or user message instructs it to), or `json_schema` (a named JSON Schema with optional `strict` adherence; only a subset of JSON Schema is supported when `strict` is true, and `json_schema` is recommended over `json_object` for models that support it).
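As an illustration of the `text.format` options above, the following request-body sketch asks for strict `json_schema` output. The `model` ID and `input` field name are assumptions for illustration only; the `text.format` object itself follows the schema described here.

{% code overflow="wrap" %}
```python
payload = {
    "model": "openai/gpt-4o",   # assumed model ID, for illustration only
    "input": "Extract the species and cap color from: 'Amanita muscaria has a red cap.'",
    "text": {
        "format": {
            "type": "json_schema",
            "name": "mushroom_info",   # a-z, A-Z, 0-9, underscores and dashes, max 64 chars
            "strict": True,            # only a subset of JSON Schema is supported when strict
            "schema": {
                "type": "object",
                "properties": {
                    "species": {"type": "string"},
                    "cap_color": {"type": "string"},
                },
                "required": ["species", "cap_color"],
                "additionalProperties": False,
            },
        }
    },
}
```
{% endcode %}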
* `tool_choice` – controls tool usage: `none` (the model never calls a tool and just generates a message), `auto` (the model decides), `required` (the model must call at least one tool), an object naming a built-in tool type (`web_search_preview`, `computer_use_preview`, `code_interpreter`, `mcp`, `file_search`, `image_generation`), or `{"type": "function", "name": ...}` to force a specific function.
* `tools` – the tools the model may call while generating a response:
  * `web_search_preview` / `web_search_preview_2025_03_11` – web search, with `search_context_size` (`low`, `medium`, `high`; default `medium`) and an approximate `user_location` (`city`, `country`, `region`, `timezone`).
  * `computer_use_preview` – controls a virtual computer; requires `display_height`, `display_width`, and `environment` (`windows`, `mac`, `linux`, `ubuntu`, `browser`).
  * `mcp` – a remote Model Context Protocol server: `server_label`, `server_url`, optional `allowed_tools`, `headers`, and `require_approval` (`always`, `never`, or per-tool lists).
  * `code_interpreter` – runs Python code in a container (a container ID string or `{"type": "auto"}`).
  * `local_shell` – lets the model execute shell commands in a local environment.
  * `function` – a custom function: `name`, a JSON Schema `parameters` object, optional `strict` parameter validation, and a `description` the model uses to decide whether to call it.
  * `image_generation` – image generation (`model`: `gpt-image-1`; `background`, `input_image_mask`, `moderation`, `output_compression`, `output_format` `png`/`webp`/`jpeg`, `partial_images` 0–3, `quality`, `size`).
* `top_p` – nucleus sampling; 0.1 means only the tokens comprising the top 10% probability mass are considered. Alter this or `temperature`, not both.
* `truncation` – `auto` drops input items from the middle of the conversation when the context window is exceeded; `disabled` (default) fails the request with a 400 error instead.
* `usage` – token accounting: `input_tokens` (with `cached_tokens`), `output_tokens` (with `reasoning_tokens`), and `total_tokens`.

The required Response fields are `created_at`, `id`, `model`, `object`, and `parallel_tool_calls`. The `response.completed` event wraps this object together with a `sequence_number` and `type: "response.completed"`.
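A request-body sketch for the `tools` and `tool_choice` fields described above. The function definition follows the `function` tool schema; the `model` ID and `input` field name are assumptions for illustration.

{% code overflow="wrap" %}
```python
payload = {
    "model": "openai/gpt-4o",   # assumed model ID, for illustration only
    "input": "What is the weather in Paris?",
    "tools": [
        {
            "type": "function",
            "name": "get_weather",
            "description": "Look up the current weather for a city.",
            "strict": True,
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
                "additionalProperties": False,
            },
        }
    ],
    # "auto" lets the model decide; use {"type": "function", "name": "get_weather"}
    # to force this specific function, or "required" to force some tool call.
    "tool_choice": "auto",
}
```
{% endcode %}

When the model decides to call the tool, the `output` array contains a `function_call` item whose `arguments` field is a JSON string; you run the function yourself and pass the result back as a `function_call_output` item on the next turn.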
event."},"type":{"type":"string","enum":["error"],"description":"The type of the event."}},"required":["code","message","param","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is searching."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The function-call arguments delta that is added."},"item_id":{"type":"string","description":"The ID of the output item that the function-call arguments delta is added to."},"output_index":{"type":"number","description":"The index of the output item that the function-call arguments delta is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"The function-call arguments."},"item_id":{"type":"string","description":"The ID of the item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this 
Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. 
One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.in_progress"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was 
created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.failed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The 
error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was incomplete."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.incomplete"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was added."},"output_index":{"type":"number","description":"The index of the output item that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.added"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was marked done."},"output_index":{"type":"number","description":"The index of the output item that was marked done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.done"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added to the summary."},"item_id":{"type":"string","description":"The ID of the item this summary text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","summary_index","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary text is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"text":{"type":"string","description":"The full text of the completed reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.done"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","summary_index","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part this delta is associated with."},"delta":{"type":"string","description":"The text delta that was added to the reasoning content."},"item_id":{"type":"string","description":"The ID of the item this reasoning text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.reasoning_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part."},"item_id":{"type":"string","description":"The ID of the item this reasoning text is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The full text of the completed reasoning content."},"type":{"type":"string","enum":["response.reasoning_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","sequence_number","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is added to."},"delta":{"type":"string","description":"The refusal text that is added."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is added to."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is finalized."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is finalized."},"refusal":{"type":"string","description":"The refusal text that is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","refusal","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web 
search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.generating"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"partial_image_b64":{"type":"string","description":"Base64-encoded partial image data, suitable for rendering as an image."},"partial_image_index":{"type":"number","description":"0-based index for the partial image (backend is 1-based, but this is 0-based for the user)."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["response.image_generation_call.partial_image"],"description":"The type of the event."}},"required":["item_id","output_index","partial_image_b64","partial_image_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"A JSON string containing the partial update to the arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string containing the finalized arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that completed."},"output_index":{"type":"number","description":"The index of the output item that completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that produced this output."},"output_index":{"type":"number","description":"The index of the output item that was processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool 
call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that is being processed."},"output_index":{"type":"number","description":"The index of the output item that is being processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"annotation":{"nullable":true,"description":"The annotation object being added."},"annotation_index":{"type":"number","description":"The index of the annotation within the content part."},"content_index":{"type":"number","description":"The index of the content part within the output item."},"item_id":{"type":"string","description":"The unique identifier of the item to which the annotation is being added."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.annotation.added"],"description":"The type of the event."}},"required":["annotation_index","content_index","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The full response object that is queued."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.queued"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The incremental input data (delta) for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this delta applies 
to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"input":{"type":"string","description":"The complete input data for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this event applies to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.done"],"description":"The type of the event."}},"required":["input","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The completed summary part."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.done"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text content is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the text content is finalized."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text content is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The text content that is finalized."},"type":{"type":"string","enum":["response.output_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","logprobs","output_index","sequence_number","text","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the 
response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The summary part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.added"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text delta was added to."},"delta":{"type":"string","description":"The text delta that was added."},"item_id":{"type":"string","description":"The ID of the output item that the text delta was added to."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text delta was added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","logprobs","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that is done."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the 
event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that is done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was created."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.created"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that was added."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added 
to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.added"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]}]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"gpt-4o-mini", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'gpt-4o-mini', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "chatcmpl-BKKaTWquxfp3dbSlNvUKM6mXwmZ78",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I assist you today?",
        "refusal": null,
        "annotations": []
      }
    }
  ],
  "created": 1744185397,
  "model": "gpt-4o-mini-2024-07-18",
  "usage": {
    "prompt_tokens": 3,
    "completion_tokens": 13,
    "total_tokens": 16,
    "prompt_tokens_details": {"cached_tokens": 0, "audio_tokens": 0},
    "completion_tokens_details": {
      "reasoning_tokens": 0,
      "audio_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    }
  },
  "system_fingerprint": "fp_b376dfbbd5"
}
```
{% endcode %}
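If you only need the reply as a plain string, a minimal sketch of extracting it from the parsed response is shown below. It assumes `data` is the dictionary produced by the Python example above; the field names follow the sample response.

{% code overflow="wrap" %}
```python
# A minimal sketch: pull the assistant's reply and token usage
# out of the parsed chat completion response.
# Assumes `data` is the dictionary returned by response.json() above.
reply = data["choices"][0]["message"]["content"]
total_tokens = data["usage"]["total_tokens"]

print(reply)                           # "Hello! How can I assist you today?"
print(f"Total tokens: {total_tokens}")
```
{% endcode %}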
## Code Example #2: Using /responses Endpoint

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4o-mini",
        "input": "Hello"  # Insert your question for the model here, instead of Hello
    }
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  try {
    const response = await fetch('https://api.aimlapi.com/v1/responses', {
      method: 'POST',
      headers: {
        // Insert your AIML API Key instead of
        'Authorization': 'Bearer ',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'gpt-4o-mini',
        input: 'Hello', // Insert your question here, instead of Hello
      }),
    });

    if (!response.ok) {
      throw new Error(`HTTP error! Status ${response.status}`);
    }

    const data = await response.json();
    console.log(JSON.stringify(data, null, 2));
  } catch (error) {
    console.error('Error', error);
  }
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "resp_686ba45ce63481a2a4b1fad55d2bea8102a1cc22f1a1bcf1", "object": "response", "created_at": 1751884892, "error": null, "incomplete_details": null, "instructions": null, "max_output_tokens": 512, "model": "gpt-4o-mini", "output": [ { "id": "rs_686ba463d18481a29dde85cfd7b055bf02a1cc22f1a1bcf1", "type": "reasoning", "summary": [] }, { "id": "msg_686ba463d4e081a2b2e2aff962ab00f702a1cc22f1a1bcf1", "type": "message", "status": "in_progress", "content": [ { "type": "output_text", "annotations": [], "logprobs": [], "text": "Hello! How can I help you today?" } ], "role": "assistant" } ], "parallel_tool_calls": true, "previous_response_id": null, "reasoning": { "effort": "medium", "summary": null }, "temperature": 1, "text": { "format": { "type": "text" } }, "tool_choice": "auto", "tools": [], "top_p": 1, "truncation": "disabled", "usage": { "input_tokens": 294, "input_tokens_details": { "cached_tokens": 0 }, "output_tokens": 2520, "output_tokens_details": { "reasoning_tokens": 0 }, "total_tokens": 2814 }, "metadata": {}, "output_text": "Hello! How can I help you today?" } ``` {% endcode %}
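The schema describes `output_text` as an SDK-only convenience property, so it may not always be present when you call the endpoint over plain HTTP. The sketch below shows one way to assemble the text yourself from the `output` array; it assumes `data` is the parsed `/v1/responses` JSON from the example above.

{% code overflow="wrap" %}
```python
# A minimal sketch: collect the text parts from the output array of a
# /v1/responses result. Assumes `data` is the parsed JSON from the example above.
def extract_text(data):
    parts = []
    for item in data.get("output", []):
        if item.get("type") == "message":
            for part in item.get("content", []):
                if part.get("type") == "output_text":
                    parts.append(part["text"])
    return "".join(parts)

print(extract_text(data))  # "Hello! How can I help you today?"
```
{% endcode %}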
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o-search-preview.md # gpt-4o-search-preview {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `gpt-4o-search-preview` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A specialized model trained to understand and execute web search queries with the [Chat completions](https://docs.aimlapi.com/capabilities/completion-or-chat-models) API. ## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace the API key placeholder with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field: this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters to adjust the model’s behavior (see the sketch right after these steps). Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
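As an illustration of step 4, here is a minimal, hypothetical sketch that adds the optional `max_tokens` parameter (listed in the API schema below) on top of the required `model` and `messages` fields. The `<YOUR_AIMLAPI_KEY>` placeholder and the example prompt are assumptions; replace them with your own values.

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # <YOUR_AIMLAPI_KEY> is a placeholder; insert your actual AIML API Key
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4o-search-preview",
        "messages": [
            {"role": "user", "content": "What is the latest news about AI today?"}
        ],
        # Optional parameter from the API schema below: caps the length of the completion
        "max_tokens": 512,
    },
)
print(response.json())
```
{% endcode %}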
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["gpt-4o-search-preview"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. 
Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]}},"required":["model","messages"],"title":"gpt-4o-search-preview"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. 
Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"gpt-4o-search-preview", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'gpt-4o-search-preview', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "chatcmpl-2d186134-834f-4b68-9c61-62d5a4f67872",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "message": {
        "role": "assistant",
        "content": "Hello! How can I assist you today? ",
        "refusal": null,
        "annotations": []
      }
    }
  ],
  "created": 1744217100,
  "model": "gpt-4o-search-preview-2025-03-11",
  "usage": {
    "prompt_tokens": 5,
    "completion_tokens": 210,
    "total_tokens": 215,
    "prompt_tokens_details": {"cached_tokens": 0, "audio_tokens": 0},
    "completion_tokens_details": {
      "reasoning_tokens": 0,
      "audio_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    }
  },
  "system_fingerprint": ""
}
```
{% endcode %}
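When the model actually performs a web search, the `annotations` array in the message may contain `url_citation` entries (see the response schema above); in the sample response above it happens to be empty. A minimal sketch for listing any citations is shown below; it assumes `data` is the parsed response from the example above.

{% code overflow="wrap" %}
```python
# A minimal sketch: print the reply plus any URL citations attached to it.
# Assumes `data` is the parsed JSON from the gpt-4o-search-preview example above.
message = data["choices"][0]["message"]
print(message["content"])

for annotation in message.get("annotations") or []:
    if annotation.get("type") == "url_citation":
        citation = annotation["url_citation"]
        print(f"- {citation['title']}: {citation['url']}")
```
{% endcode %}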
--- # Source: https://docs.aimlapi.com/api-references/speech-models/speech-to-text/openai/gpt-4o-transcribe.md # gpt-4o-transcribe {% hint style="info" %} This documentation is valid for the following list of our models: * `openai/gpt-4o-transcribe` {% endhint %} ## Model Overview A speech-to-text model based on GPT-4o for audio transcription. It provides improved word error rates and more accurate language recognition compared to the original Whisper models. Recommended for use cases that require higher transcription accuracy. {% hint style="success" %} OpenAI STT models are priced based on tokens, similar to chat models. In practice, this means the cost primarily depends on the duration of the input audio. {% endhint %} ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schemas #### Creating and sending a speech-to-text conversion task to the server ## POST /v1/stt/create > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.SpeechToTextCreateResponseDTO":{"type":"object","properties":{"generation_id":{"type":"string","format":"uuid"}},"required":["generation_id"]}}},"paths":{"/v1/stt/create":{"post":{"operationId":"VoiceModelsController_createSpeechToText_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["openai/gpt-4o-transcribe"]},"language":{"type":"string","description":"The BCP-47 language tag that hints at the primary spoken language. Depending on the Model and API endpoint you choose only certain languages are available"},"prompt":{"type":"string","description":"An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language."},"temperature":{"type":"number","minimum":0,"maximum":1,"default":0,"description":"The sampling temperature, between 0 and 1. 
Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic."},"url":{"type":"string","format":"uri","description":"URL of the input audio file."}},"required":["model","url"]}}}},"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.SpeechToTextCreateResponseDTO"}}}}},"tags":["Voice Models"]}}}} ``` #### Requesting the result of the task from the server using the generation\_id ## GET /v1/stt/{generation\_id} > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.SpeechToTextGetResponseDTO":{"type":"object","properties":{"generation_id":{"type":"string"},"status":{"type":"string","enum":["queued","completed","error","generating"]},"result":{"anyOf":[{"type":"object","properties":{"metadata":{"type":"object","properties":{"transaction_key":{"type":"string","description":"A unique transaction key; currently always “deprecated”."},"request_id":{"type":"string","description":"A UUID identifying this specific transcription request."},"sha256":{"type":"string","description":"The SHA-256 hash of the submitted audio file (for pre-recorded requests)."},"created":{"type":"string","format":"date-time","description":"ISO-8601 timestamp."},"duration":{"type":"number","description":"Length of the audio in seconds."},"channels":{"type":"number","description":"The top-level results object containing per-channel transcription alternatives."},"models":{"type":"array","items":{"type":"string"},"description":"List of model UUIDs used for this transcription"},"model_info":{"type":"object","additionalProperties":{"type":"object","properties":{"name":{"type":"string","description":"The human-readable name of the model — identifies which model was used."},"version":{"type":"string","description":"The specific version of the model."},"arch":{"type":"string","description":"The architecture of the model — describes the model family / generation."}},"required":["name","version","arch"]},"description":"Mapping from each model UUID (in 'models') to detailed info: its name, version, and architecture."}},"required":["transaction_key","request_id","sha256","created","duration","channels","models","model_info"],"description":"Metadata about the transcription response, including timing, models, and IDs."},"results":{"type":"object","nullable":true,"properties":{"channels":{"type":"object","properties":{"alternatives":{"type":"array","items":{"type":"object","properties":{"transcript":{"type":"string","description":"The full transcript text for this alternative."},"confidence":{"type":"number","description":"Overall confidence score (0-1) that assigns to this transcript alternative."},"words":{"type":"array","items":{"type":"object","properties":{"word":{"type":"string","description":"The raw recognized word, without punctuation or capitalization."},"start":{"type":"number","description":"Start timestamp of the word (in seconds, from beginning of audio)."},"end":{"type":"number","description":"End timestamp of the word (in seconds)."},"confidence":{"type":"number","description":"Confidence score (0-1) for this individual word."},"punctuated_word":{"type":"string","description":"The same word but with punctuation/capitalization applied (if 
smart_format is enabled)."}},"required":["word","start","end","confidence","punctuated_word"]},"description":"List of word-level timing, confidence, and punctuation details."},"paragraphs":{"type":"array","items":{"type":"object","properties":{"transcript":{"type":"string","description":"The transcript split into paragraphs (with line breaks), when paragraphing is enabled."},"paragraphs":{"type":"object","properties":{"sentences":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"Text of a single sentence in the paragraph."},"start":{"type":"number","description":"Start time of the sentence (in seconds)."},"end":{"type":"number","description":"End time of the sentence (in seconds)."}},"required":["text","start","end"]},"description":"List of sentences in this paragraph, with start/end times."},"num_words":{"type":"number","description":"Number of words in this paragraph."},"start":{"type":"number","description":"Start time of the paragraph (in seconds)."},"end":{"type":"number","description":"End time of the paragraph (in seconds)."}},"required":["sentences","num_words","start","end"],"description":"Structure describing each paragraph: its timespan, word count, and sentence breakdown."}},"required":["transcript","paragraphs"]},"description":"An array of paragraph objects, present when the paragraphs feature is enabled."}},"required":["transcript","confidence","words","paragraphs"]},"description":"List of possible transcription hypotheses (“alternatives”) for each channel."}},"required":["alternatives"],"description":"The top-level results object containing per-channel transcription alternatives."}},"required":["channels"]}},"required":["metadata"]},{"type":"object","properties":{"id":{"type":"string","format":"uuid"},"language_model":{"type":"string"},"acoustic_model":{"type":"string"},"language_code":{"type":"string"},"status":{"type":"string","enum":["queued","processing","completed","error"]},"language_detection":{"type":"boolean"},"language_confidence_threshold":{"type":"number"},"language_confidence":{"type":"number"},"speech_model":{"type":"string","enum":["best","slam-1","universal"]},"text":{"type":"string"},"words":{"type":"array","items":{"type":"object","properties":{"confidence":{"type":"number"},"end":{"type":"number"},"speaker":{"type":"string"},"start":{"type":"number"},"text":{"type":"string"}},"required":["confidence","end","start","text"]}},"utterances":{"type":"array","items":{"type":"object","properties":{"confidence":{"type":"number"},"end":{"type":"number"},"speaker":{"type":"string"},"start":{"type":"number"},"text":{"type":"string"},"words":{"type":"array","items":{"type":"object","properties":{"confidence":{"type":"number"},"end":{"type":"number"},"speaker":{"type":"string"},"start":{"type":"number"},"text":{"type":"string"}},"required":["confidence","end","start","text"]}}},"required":["confidence","end","speaker","start","text","words"]}},"confidence":{"type":"number"},"audio_duration":{"type":"number"},"punctuate":{"type":"boolean"},"format_text":{"type":"boolean"},"disfluencies":{"type":"boolean"},"multichannel":{"type":"boolean"},"webhook_url":{"type":"string"},"webhook_status_code":{"type":"number"},"webhook_auth_header_name":{"type":"string"},"speed_boost":{"type":"boolean"},"auto_highlights_result":{"type":"object","properties":{"status":{"type":"string"},"results":{"type":"array","items":{"type":"object","properties":{"count":{"type":"number"},"rank":{"type":"number"},"text":{"type":"string"},"timestamps":{"type":"
array","items":{"type":"object","properties":{"start":{"type":"number"},"end":{"type":"number"}},"required":["start","end"]}}},"required":["count","rank","text","timestamps"]}}},"required":["status","results"]},"auto_highlights":{"type":"boolean"},"audio_start_from":{"type":"number"},"audio_end_at":{"type":"number"},"word_boost":{"type":"array","items":{"type":"string"}},"boost_param":{"type":"string"},"filter_profanity":{"type":"boolean"},"redact_pii":{"type":"boolean"},"redact_pii_audio":{"type":"boolean"},"redact_pii_audio_quality":{"type":"string","enum":["mp3","wav"]},"redact_pii_policies":{"type":"array","items":{"type":"string"}},"redact_pii_sub":{"type":"string","enum":["entity_name","hash"]},"speaker_labels":{"type":"boolean"},"speakers_expected":{"type":"number"},"content_safety":{"type":"boolean"},"iab_categories":{"type":"boolean"},"content_safety_labels":{"type":"object","properties":{"status":{"type":"string"},"results":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string"},"labels":{"type":"array","items":{"type":"object","properties":{"label":{"type":"string"},"confidence":{"type":"number"},"severity":{"type":"number"}},"required":["label","confidence","severity"]}},"sentences_idx_start":{"type":"number"},"sentences_idx_end":{"type":"number"},"timestamp":{"type":"object","properties":{"start":{"type":"number"},"end":{"type":"number"}},"required":["start","end"]}},"required":["text","labels","sentences_idx_start","sentences_idx_end","timestamp"]}},"summary":{"type":"object","additionalProperties":{"type":"number"}}},"required":["status","results","summary"]},"iab_categories_result":{"type":"object","properties":{"status":{"type":"string"},"results":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string"},"labels":{"type":"array","items":{"type":"object","properties":{"relevance":{"type":"number"},"label":{"type":"string"}},"required":["relevance","label"]}},"timestamp":{"type":"object","properties":{"start":{"type":"number"},"end":{"type":"number"}},"required":["start","end"]}},"required":["text","labels","timestamp"]}},"summary":{"type":"object","additionalProperties":{"type":"number"}}},"required":["status","results","summary"]},"custom_spelling":{"type":"array","items":{"type":"object","properties":{"from":{"type":"string"},"to":{"type":"string"}},"required":["from","to"]}},"chapters":{"type":"array","items":{"type":"object","properties":{"summary":{"type":"string"},"headline":{"type":"string"},"gist":{"type":"string"},"start":{"type":"number"},"end":{"type":"number"}},"required":["summary","headline","gist","start","end"]}},"summarization":{"type":"boolean"},"summary_type":{"type":"string"},"summary_model":{"type":"string"},"summary":{"type":"string"},"auto_chapters":{"type":"boolean"},"sentiment_analysis":{"type":"boolean"},"sentiment_analysis_results":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string"},"start":{"type":"number"},"end":{"type":"number"},"sentiment":{"type":"string","enum":["POSITIVE","NEUTRAL","NEGATIVE"]},"confidence":{"type":"number"},"speaker":{"type":"string"}},"required":["text","start","end","sentiment","confidence"]}},"entity_detection":{"type":"boolean"},"entities":{"type":"array","items":{"type":"object","properties":{"entity_type":{"type":"string"},"text":{"type":"string"},"start":{"type":"number"},"end":{"type":"number"}},"required":["entity_type","text","start","end"]}},"speech_threshold":{"type":"number"},"throttled":{"type":"boolean"},"error":{"type":"string"}},"requ
ired":["id","status"],"additionalProperties":false},{"type":"object","properties":{"text":{"type":"string"},"usage":{"type":"object","properties":{"type":{"type":"string","enum":["tokens"]},"input_tokens":{"type":"number"},"input_token_details":{"type":"object","properties":{"text_tokens":{"type":"number"},"audio_tokens":{"type":"number"}},"required":["text_tokens","audio_tokens"]},"output_tokens":{"type":"number"},"total_tokens":{"type":"number"}},"required":["input_tokens","output_tokens","total_tokens"]}},"required":["text"],"additionalProperties":false},{"nullable":true}]},"error":{"nullable":true}},"required":["generation_id","status"]}}},"paths":{"/v1/stt/{generation_id}":{"get":{"operationId":"VoiceModelsController_getSTT_v1","parameters":[{"name":"generation_id","required":true,"in":"path","schema":{"type":"string"}}],"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.SpeechToTextGetResponseDTO"}}}}},"tags":["Voice Models"]}}}} ``` ## Code Example: Processing a Speech Audio File via URL Let's use the `openai/gpt-4o-transcribe` model to transcribe the following audio fragment: {% embed url="" %} {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time import json base_url = "https://api.aimlapi.com/v1" # Insert your AIML API Key instead of : api_key = "" # Create and send a speech-to-text conversion task to the server def create_stt(): url = f"{base_url}/stt/create" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "openai/gpt-4o-transcribe", "url": "https://audio-samples.github.io/samples/mp3/blizzard_primed/sample-0.mp3" } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Request the result of the task from the server using the generation_id def get_stt(gen_id): url = f"{base_url}/stt/{gen_id}" headers = { "Authorization": f"Bearer {api_key}", } response = requests.get(url, headers=headers) return response.json() # Start the generation, then repeatedly request the result from the server every 10 sec. def main(): stt_response = create_stt() gen_id = stt_response.get("generation_id") if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_stt(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["queued", "generating"]: print(f"Status: {status}. Checking again in 10 seconds.") time.sleep(10) else: # data = .json() print("Processing complete:") print(json.dumps(response_data["result"], indent=2, ensure_ascii=False)) return response_data print("Timeout reached. 
Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript const baseUrl = "https://api.aimlapi.com/v1"; // Insert your AIML API Key instead of : const apiKey = ""; // Create and send a speech-to-text conversion task to the server async function createSTT() { const url = `${baseUrl}/stt/create`; const response = await fetch(url, { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, body: JSON.stringify({ model: "openai/gpt-4o-transcribe", url: "https://audio-samples.github.io/samples/mp3/blizzard_primed/sample-0.mp3", }), }); if (!response.ok) { const text = await response.text(); console.error(`Error: ${response.status} - ${text}`); return null; } const data = await response.json(); console.log(data); return data; } // Request the result of the task from the server using the generation_id async function getSTT(genId) { const url = `${baseUrl}/stt/${genId}`; const response = await fetch(url, { headers: { "Authorization": `Bearer ${apiKey}`, }, }); if (!response.ok) { return null; } return response.json(); } // Start generation and poll every 10s async function main() { const sttResponse = await createSTT(); const genId = sttResponse?.generation_id; if (!genId) { console.error("No generation_id received"); return null; } const startTime = Date.now(); const timeoutMs = 600 * 1000; // 10 minutes while (Date.now() - startTime < timeoutMs) { const responseData = await getSTT(genId); if (!responseData) { console.error("Error: No response from API"); return null; } const status = responseData.status; if (status === "queued" || status === "generating") { console.log(`Status: ${status}. Checking again in 10 seconds.`); await new Promise(resolve => setTimeout(resolve, 10_000)); } else { console.log("Processing complete:"); console.log(JSON.stringify(responseData.result, null, 2)); return responseData; } } console.log("Timeout reached. Stopping."); return null; } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ``` {'generation_id': 'RlLz0hRdAs9voL5Qi1Pzr', 'status': 'queued'} Status: queued. Checking again in 10 seconds. Processing complete: { "text": "He doesn't belong to you, and I don't see how you have anything to do with what is be his power. He's he personally that from this stage to you.", "usage": { "type": "tokens", "total_tokens": 135, "input_tokens": 100, "input_token_details": { "text_tokens": 0, "audio_tokens": 100 }, "output_tokens": 35 } } ``` {% endcode %}
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o.md # gpt-4o {% hint style="warning" %} **Deprecation notice**\ `gpt-4o` will be removed from the API on **February 17, 2026**. Please migrate to [`gpt-5.1-chat-latest`](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-1-chat-latest). {% endhint %}
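For a basic chat request, migration typically comes down to changing the `model` field in your request body. Below is a minimal sketch of that change (check the [`gpt-5.1-chat-latest`](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-1-chat-latest) page for any parameter differences):

{% code overflow="wrap" %}
```python
# Before: deprecated model ID (removed from the API on February 17, 2026)
payload = {"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello!"}]}

# After: for a basic chat request, only the model ID needs to change
payload = {"model": "gpt-5.1-chat-latest", "messages": [{"role": "user", "content": "Hello!"}]}
```
{% endcode %}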

This documentation is valid for the following models:

* gpt-4o
* chatgpt-4o-latest
* gpt-4o-2024-05-13
* gpt-4o-2024-08-06
## Model Overview

OpenAI's flagship model designed to integrate enhanced capabilities across text, vision, and audio, providing real-time reasoning. You can also view [a detailed comparison of this model](https://aimlapi.com/comparisons/qwen-2-vs-chatgpt-4o-comparison) on our main website.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**\
:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request (a minimal sketch is also shown right after these steps). Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schemas), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
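As a quick illustration of steps 2 through 5, here is a minimal Python sketch of a Chat Completions request. It mirrors the structure of the full [code example](#code-example) at the bottom of the page; the question in the `content` field is only a placeholder, so replace it with your own prompt:

{% code overflow="wrap" %}
```python
import requests

# Insert your AIML API Key instead of :
api_key = ""

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4o",
        "messages": [
            # Put your own question or request into the content field
            {"role": "user", "content": "Explain what an API key is in one sentence."}
        ],
    },
)
response.raise_for_status()

# The assistant's reply is returned in choices[0].message.content
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}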
## API Schemas
Chat Completions vs. Responses API

**Chat Completions**\
The *chat completions* API is the older, chat-oriented interface where you send a list of messages (`role: user`, `role: assistant`, etc.), and the model returns a single response. It was designed specifically for conversational workflows and follows a structured chat message format. It is now considered a legacy interface.

**Responses**\
The *Responses* API is the newer, unified interface used across OpenAI’s latest models. Instead of focusing only on chat, it supports multiple input types (text, images, audio, tools, etc.) and multiple output modalities (text, JSON, images, audio, video). It is more flexible, more consistent across models, and intended to replace chat completions entirely.
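To make the structural difference concrete, here is a minimal sketch that sends the same question to both endpoints. The request shapes follow the schemas below; the toy prompt is only a placeholder:

{% code overflow="wrap" %}
```python
import requests

base_url = "https://api.aimlapi.com/v1"
# Insert your AIML API Key instead of :
api_key = ""
headers = {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}

# Chat Completions: the conversation is a list of role-tagged messages;
# the reply comes back in choices[0].message.content.
chat = requests.post(
    f"{base_url}/chat/completions",
    headers=headers,
    json={"model": "gpt-4o", "messages": [{"role": "user", "content": "What is 2 + 2?"}]},
)
chat.raise_for_status()
print(chat.json()["choices"][0]["message"]["content"])

# Responses: the request carries a single `input` field (a plain string here);
# the assistant text is returned inside the output message items,
# as described in the Responses schema below.
resp = requests.post(
    f"{base_url}/responses",
    headers=headers,
    json={"model": "gpt-4o", "input": "What is 2 + 2?"},
)
resp.raise_for_status()
print(resp.json())
```
{% endcode %}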
### Chat Completions Endpoint ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-4o","gpt-4o","gpt-4o-2024-08-06","gpt-4o-2024-05-13","chatgpt-4o-latest"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. 
So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."}},"required":["model","messages"],"title":"openai/gpt-4o"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ### Responses Endpoint This endpoint is currently used *only* with OpenAI models. Some models support both the `/chat/completions` and `/responses` endpoints, while others support only one of them. ## POST /v1/responses > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/responses":{"post":{"operationId":"_v1_responses","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-4o","gpt-4o","gpt-4o-2024-08-06","gpt-4o-2024-05-13","chatgpt-4o-latest"]},"input":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the user role."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. 
Instructions given with the developer or system role take precedence over instructions given with the user role."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"],"description":"An output message from the model."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"],"description":"The results of a web search tool call."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"],"description":"A tool call to run a function."},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"],"description":"The output of a function tool call."},{"type":"object","properties":{"code":{"type":"string","description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","interpreting"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["code","id","outputs","status","type","container_id"],"description":"A tool call to run code."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The JSON schema describing the tool's input."},"name":{"type":"string","description":"The name of the tool."},"annotations":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Additional annotations about the tool."},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["input_schema","name"]},"description":"The tools available on the server."},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. 
Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"],"description":"A list of tools available on an MCP server."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"],"description":"A request for human approval of a tool invocation."},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"],"description":"A response to an MCP approval request."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"],"description":"An invocation of a tool on an MCP server."},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}],"description":"Text, image, or file inputs to the model, used to generate a response."},"background":{"type":"boolean","default":false,"description":"Whether to run the model response in the background."},"instructions":{"type":"string","nullable":true,"description":"A system (or developer) message inserted into the model's context.\n\nWhen using along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses."},"include":{"type":"array","nullable":true,"items":{"type":"string","enum":["message.input_image.image_url","computer_call_output.output.image_url","reasoning.encrypted_content","code_interpreter_call.outputs"]},"description":"Specify additional output data to include in the model response. 
Currently supported values are:\n- code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.\n- computer_call_output.output.image_url: Include image urls from the computer call output.\n- file_search_call.results: Include the search results of the file search tool call.\n- message.output_text.logprobs: Include logprobs with assistant messages.\n- reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).\n"},"max_output_tokens":{"type":"integer","description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]}]},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"store":{"type":"boolean","nullable":true,"default":false,"description":"Whether to store the generated model response for later retrieval via API."},"stream":{"type":"boolean","nullable":true,"default":false,"description":"If set to true, the model response data will be streamed to the client as it is generated using server-sent events. "},"text":{"type":"object","properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. 
Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["format"],"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"truncation":{"type":"string","enum":["auto","disabled"],"default":"disabled","description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"tools":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","description":"A description of the function. Used by the model to determine whether or not to call the function."}},"required":["name","parameters","strict","type"],"description":"Defines a function in your own code the model can choose to call."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. 
Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."}]},"description":"An array of tools the model may call while generating a response. 
You can specify which tool to use by setting the tool_choice parameter."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"]}],"description":"How the model should select which tool (or tools) to use when generating a response."}},"required":["model","input"],"title":"openai/gpt-4o"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. 
Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"]}},"text/event-stream":{"schema":{"oneOf":[{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.done"],"description":"The type of the 
event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.done"],"description":"The type of the event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The partial code snippet being streamed by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The final code snippet output by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.done"],"description":"The type of the event."}},"required":["code","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter is interpreting code."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.interpreting"],"description":"The type of the 
event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. 
Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g."},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g."},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"Properties of the completed response."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.completed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."},"param":{"type":"string","description":"The error parameter."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["error"],"description":"The type of the event."}},"required":["code","message","param","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is searching."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The function-call arguments delta that is added."},"item_id":{"type":"string","description":"The ID of the output item that the function-call arguments delta is added to."},"output_index":{"type":"number","description":"The index of the output item that the function-call arguments delta is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"The function-call arguments."},"item_id":{"type":"string","description":"The ID of the item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this 
Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. 
One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.in_progress"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was 
created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.failed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The 
error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was incomplete."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.incomplete"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was added."},"output_index":{"type":"number","description":"The index of the output item that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.added"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was marked done."},"output_index":{"type":"number","description":"The index of the output item that was marked done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.done"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added to the summary."},"item_id":{"type":"string","description":"The ID of the item this summary text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","summary_index","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary text is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"text":{"type":"string","description":"The full text of the completed reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.done"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","summary_index","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part this delta is associated with."},"delta":{"type":"string","description":"The text delta that was added to the reasoning content."},"item_id":{"type":"string","description":"The ID of the item this reasoning text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.reasoning_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part."},"item_id":{"type":"string","description":"The ID of the item this reasoning text is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The full text of the completed reasoning content."},"type":{"type":"string","enum":["response.reasoning_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","sequence_number","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is added to."},"delta":{"type":"string","description":"The refusal text that is added."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is added to."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is finalized."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is finalized."},"refusal":{"type":"string","description":"The refusal text that is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","refusal","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web 
search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.generating"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"partial_image_b64":{"type":"string","description":"Base64-encoded partial image data, suitable for rendering as an image."},"partial_image_index":{"type":"number","description":"0-based index for the partial image (backend is 1-based, but this is 0-based for the user)."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["response.image_generation_call.partial_image"],"description":"The type of the event."}},"required":["item_id","output_index","partial_image_b64","partial_image_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"A JSON string containing the partial update to the arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string containing the finalized arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that completed."},"output_index":{"type":"number","description":"The index of the output item that completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that produced this output."},"output_index":{"type":"number","description":"The index of the output item that was processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool 
call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that is being processed."},"output_index":{"type":"number","description":"The index of the output item that is being processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"annotation":{"nullable":true,"description":"The annotation object being added."},"annotation_index":{"type":"number","description":"The index of the annotation within the content part."},"content_index":{"type":"number","description":"The index of the content part within the output item."},"item_id":{"type":"string","description":"The unique identifier of the item to which the annotation is being added."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.annotation.added"],"description":"The type of the event."}},"required":["annotation_index","content_index","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The full response object that is queued."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.queued"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The incremental input data (delta) for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this delta applies 
to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"input":{"type":"string","description":"The complete input data for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this event applies to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.done"],"description":"The type of the event."}},"required":["input","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The completed summary part."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.done"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text content is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the text content is finalized."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text content is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The text content that is finalized."},"type":{"type":"string","enum":["response.output_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","logprobs","output_index","sequence_number","text","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the 
response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The summary part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.added"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text delta was added to."},"delta":{"type":"string","description":"The text delta that was added."},"item_id":{"type":"string","description":"The ID of the output item that the text delta was added to."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text delta was added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","logprobs","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that is done."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the 
event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that is done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was created."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.created"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that was added."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added 
to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.added"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]}]}}}}}}}}}
```

## Code Example

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4o",
        "messages": [
            {
                "role": "user",
                "content": "Hello"  # insert your prompt here, instead of Hello
            }
        ],
    },
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  const response = await fetch('https://api.aimlapi.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      // insert your AIML API Key instead of 
      'Authorization': 'Bearer ',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'gpt-4o',
      messages: [
        {
          role: 'user',
          content: 'Hello' // insert your prompt here, instead of Hello
        }
      ],
    }),
  });

  const data = await response.json();
  console.log(JSON.stringify(data, null, 2));
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
**Response**

{% code overflow="wrap" %}
```json5
{
  "id": "chatcmpl-BKKZhTdruxKWjdUlq29ooeew185LD",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hello! 😊 How can I help you today?",
        "refusal": null,
        "annotations": []
      }
    }
  ],
  "created": 1744185349,
  "model": "chatgpt-4o-latest",
  "usage": {
    "prompt_tokens": 84,
    "completion_tokens": 347,
    "total_tokens": 431,
    "prompt_tokens_details": { "cached_tokens": 0, "audio_tokens": 0 },
    "completion_tokens_details": {
      "reasoning_tokens": 0,
      "audio_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    }
  },
  "system_fingerprint": "fp_d04424daa8"
}
```
{% endcode %}
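If you only need the generated text rather than the full JSON, read it from the `choices` array. A minimal sketch, assuming `data` is the parsed response dictionary from the Python example above:

{% code overflow="wrap" %}
```python
# Extract the assistant's reply and the token usage from the parsed response.
reply = data["choices"][0]["message"]["content"]
total_tokens = data["usage"]["total_tokens"]

print(reply)         # e.g. "Hello! 😊 How can I help you today?"
print(total_tokens)  # e.g. 431
```
{% endcode %}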
## Code Example #2: Using /responses Endpoint

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4o",
        "input": "Hello"  # Insert your question for the model here, instead of Hello
    },
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  try {
    const response = await fetch('https://api.aimlapi.com/v1/responses', {
      method: 'POST',
      headers: {
        // Insert your AIML API Key instead of 
        'Authorization': 'Bearer ',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'gpt-4o',
        input: 'Hello', // Insert your question here, instead of Hello
      }),
    });

    if (!response.ok) {
      throw new Error(`HTTP error! Status ${response.status}`);
    }

    const data = await response.json();
    console.log(JSON.stringify(data, null, 2));
  } catch (error) {
    console.error('Error', error);
  }
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
**Response**

{% code overflow="wrap" %}
```json5
{
  "id": "resp_686ba45ce63481a2a4b1fad55d2bea8102a1cc22f1a1bcf1",
  "object": "response",
  "created_at": 1751884892,
  "error": null,
  "incomplete_details": null,
  "instructions": null,
  "max_output_tokens": 512,
  "model": "gpt-4o",
  "output": [
    {
      "id": "rs_686ba463d18481a29dde85cfd7b055bf02a1cc22f1a1bcf1",
      "type": "reasoning",
      "summary": []
    },
    {
      "id": "msg_686ba463d4e081a2b2e2aff962ab00f702a1cc22f1a1bcf1",
      "type": "message",
      "status": "in_progress",
      "content": [
        {
          "type": "output_text",
          "annotations": [],
          "logprobs": [],
          "text": "Hello! How can I help you today?"
        }
      ],
      "role": "assistant"
    }
  ],
  "parallel_tool_calls": true,
  "previous_response_id": null,
  "reasoning": {
    "effort": "medium",
    "summary": null
  },
  "temperature": 1,
  "text": {
    "format": {
      "type": "text"
    }
  },
  "tool_choice": "auto",
  "tools": [],
  "top_p": 1,
  "truncation": "disabled",
  "usage": {
    "input_tokens": 294,
    "input_tokens_details": {
      "cached_tokens": 0
    },
    "output_tokens": 2520,
    "output_tokens_details": {
      "reasoning_tokens": 0
    },
    "total_tokens": 2814
  },
  "metadata": {},
  "output_text": "Hello! How can I help you today?"
}
```
{% endcode %}
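With the `/responses` endpoint, the generated text lives inside `message` items of the `output` array; the top-level `output_text` field shown above is described in the schema as an SDK convenience property, so iterating over `output` is the safer approach for raw HTTP calls. A minimal sketch, assuming `data` is the parsed response dictionary from the Python example above:

{% code overflow="wrap" %}
```python
# Collect the text parts from all message items in the `output` array.
texts = []
for item in data.get("output", []):
    if item.get("type") == "message":
        for part in item.get("content", []):
            if part.get("type") == "output_text":
                texts.append(part["text"])

print("".join(texts))  # e.g. "Hello! How can I help you today?"
```
{% endcode %}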
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-1-chat-latest.md

# gpt-5.1-chat-latest

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `openai/gpt-5-1-chat-latest`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}

## Model Overview

An advanced conversational AI model delivering enhanced intelligence, warmth, and responsiveness. Designed as a low-latency, highly interactive model, it offers users a natural, engaging, and adaptive conversational experience.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. A minimal request sketch is also shown right after these instructions.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `<YOUR_AIMLAPI_KEY>` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schemas), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
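For orientation, here is a minimal sketch of such a request for this model in Python, following the same `/v1/chat/completions` pattern used in the earlier examples in this documentation; `<YOUR_AIMLAPI_KEY>` is a placeholder for your actual key:

```python
import requests

# Minimal sketch: chat completion request for openai/gpt-5-1-chat-latest.
# <YOUR_AIMLAPI_KEY> is a placeholder for your AI/ML API key.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-5-1-chat-latest",
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```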
## API Schemas
Chat Completions vs. Responses API

**Chat Completions**\
The *chat completions* API is the older, chat-oriented interface where you send a list of messages (`role: user`, `role: assistant`, etc.) and the model returns a single response. It was designed specifically for conversational workflows and follows a structured chat message format. It is now considered a legacy interface.

**Responses**\
The *Responses* API is the newer, unified interface used across OpenAI’s latest models. Instead of focusing only on chat, it supports multiple input types (text, images, audio, tools, etc.) and multiple output modalities (text, JSON, images, audio, video). It is more flexible, more consistent across models, and intended to replace chat completions entirely.
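To make the difference concrete, here is a sketch of the two request bodies for the same "Hello" prompt, based on the request patterns shown in the schemas and examples in this documentation:

```python
# Sketch only: the two request payloads for the same prompt.

# Chat Completions — the conversation is a list of role-tagged messages.
chat_completions_body = {
    "model": "openai/gpt-5-1-chat-latest",
    "messages": [{"role": "user", "content": "Hello"}],
}

# Responses — a single `input` field, which may be a plain string
# or a list of typed input items (text, images, files, ...).
responses_body = {
    "model": "openai/gpt-5-1-chat-latest",
    "input": "Hello",
}

# Both are POSTed with the same Authorization header:
#   POST https://api.aimlapi.com/v1/chat/completions  with chat_completions_body
#   POST https://api.aimlapi.com/v1/responses          with responses_body
```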
### Chat Completions Endpoint ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-5-1-chat-latest"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"openai/gpt-5-1-chat-latest"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ### Responses Endpoint This endpoint is currently used *only* with OpenAI models. Some models support both the `/chat/completions` and `/responses` endpoints, while others support only one of them. ## POST /v1/responses > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/responses":{"post":{"operationId":"_v1_responses","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-5-1-chat-latest"]},"input":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the user role."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. 
Instructions given with the developer or system role take precedence over instructions given with the user role."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"],"description":"An output message from the model."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"],"description":"The results of a web search tool call."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"],"description":"A tool call to run a function."},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"],"description":"The output of a function tool call."},{"type":"object","properties":{"code":{"type":"string","description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","interpreting"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["code","id","outputs","status","type","container_id"],"description":"A tool call to run code."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The JSON schema describing the tool's input."},"name":{"type":"string","description":"The name of the tool."},"annotations":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Additional annotations about the tool."},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["input_schema","name"]},"description":"The tools available on the server."},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. 
Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"],"description":"A list of tools available on an MCP server."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"],"description":"A request for human approval of a tool invocation."},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"],"description":"A response to an MCP approval request."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"],"description":"An invocation of a tool on an MCP server."},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}],"description":"Text, image, or file inputs to the model, used to generate a response."},"background":{"type":"boolean","default":false,"description":"Whether to run the model response in the background."},"instructions":{"type":"string","nullable":true,"description":"A system (or developer) message inserted into the model's context.\n\nWhen using along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses."},"include":{"type":"array","nullable":true,"items":{"type":"string","enum":["message.input_image.image_url","computer_call_output.output.image_url","reasoning.encrypted_content","code_interpreter_call.outputs"]},"description":"Specify additional output data to include in the model response. 
Currently supported values are:\n- code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.\n- computer_call_output.output.image_url: Include image urls from the computer call output.\n- file_search_call.results: Include the search results of the file search tool call.\n- message.output_text.logprobs: Include logprobs with assistant messages.\n- reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).\n"},"max_output_tokens":{"type":"integer","description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]}]},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"store":{"type":"boolean","nullable":true,"default":false,"description":"Whether to store the generated model response for later retrieval via API."},"stream":{"type":"boolean","nullable":true,"default":false,"description":"If set to true, the model response data will be streamed to the client as it is generated using server-sent events. "},"text":{"type":"object","properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. 
Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["format"],"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"truncation":{"type":"string","enum":["auto","disabled"],"default":"disabled","description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"tools":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","description":"A description of the function. Used by the model to determine whether or not to call the function."}},"required":["name","parameters","strict","type"],"description":"Defines a function in your own code the model can choose to call."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. 
Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."}]},"description":"An array of tools the model may call while generating a response. 
You can specify which tool to use by setting the tool_choice parameter."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"]}],"description":"How the model should select which tool (or tools) to use when generating a response."}},"required":["model","input"],"title":"openai/gpt-5-1-chat-latest"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. 
Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"]}},"text/event-stream":{"schema":{"oneOf":[{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.done"],"description":"The type of the 
event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.done"],"description":"The type of the event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The partial code snippet being streamed by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The final code snippet output by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.done"],"description":"The type of the event."}},"required":["code","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter is interpreting code."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.interpreting"],"description":"The type of the 
event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. 
Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"Properties of the completed response."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.completed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."},"param":{"type":"string","description":"The error parameter."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["error"],"description":"The type of the event."}},"required":["code","message","param","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is searching."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The function-call arguments delta that is added."},"item_id":{"type":"string","description":"The ID of the output item that the function-call arguments delta is added to."},"output_index":{"type":"number","description":"The index of the output item that the function-call arguments delta is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"The function-call arguments."},"item_id":{"type":"string","description":"The ID of the item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this 
Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. 
One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.in_progress"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was 
created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.failed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The 
error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was incomplete."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.incomplete"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was added."},"output_index":{"type":"number","description":"The index of the output item that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.added"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was marked done."},"output_index":{"type":"number","description":"The index of the output item that was marked done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.done"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added to the summary."},"item_id":{"type":"string","description":"The ID of the item this summary text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","summary_index","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary text is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"text":{"type":"string","description":"The full text of the completed reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.done"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","summary_index","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part this delta is associated with."},"delta":{"type":"string","description":"The text delta that was added to the reasoning content."},"item_id":{"type":"string","description":"The ID of the item this reasoning text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.reasoning_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part."},"item_id":{"type":"string","description":"The ID of the item this reasoning text is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The full text of the completed reasoning content."},"type":{"type":"string","enum":["response.reasoning_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","sequence_number","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is added to."},"delta":{"type":"string","description":"The refusal text that is added."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is added to."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is finalized."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is finalized."},"refusal":{"type":"string","description":"The refusal text that is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","refusal","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web 
search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.generating"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"partial_image_b64":{"type":"string","description":"Base64-encoded partial image data, suitable for rendering as an image."},"partial_image_index":{"type":"number","description":"0-based index for the partial image (backend is 1-based, but this is 0-based for the user)."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["response.image_generation_call.partial_image"],"description":"The type of the event."}},"required":["item_id","output_index","partial_image_b64","partial_image_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"A JSON string containing the partial update to the arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string containing the finalized arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that completed."},"output_index":{"type":"number","description":"The index of the output item that completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that produced this output."},"output_index":{"type":"number","description":"The index of the output item that was processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool 
call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that is being processed."},"output_index":{"type":"number","description":"The index of the output item that is being processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"annotation":{"nullable":true,"description":"The annotation object being added."},"annotation_index":{"type":"number","description":"The index of the annotation within the content part."},"content_index":{"type":"number","description":"The index of the content part within the output item."},"item_id":{"type":"string","description":"The unique identifier of the item to which the annotation is being added."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.annotation.added"],"description":"The type of the event."}},"required":["annotation_index","content_index","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string","description":"The name of the tool to run."},"server_label":{"type":"string","description":"The label of the MCP server making the request."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string","description":"The name of the tool to run."},"server_label":{"type":"string","description":"The label of the MCP server making the request."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The full response object that is queued."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.queued"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The incremental input data (delta) for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this delta applies 
to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"input":{"type":"string","description":"The complete input data for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this event applies to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.done"],"description":"The type of the event."}},"required":["input","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The completed summary part."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.done"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text content is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the text content is finalized."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text content is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The text content that is finalized."},"type":{"type":"string","enum":["response.output_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","logprobs","output_index","sequence_number","text","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the 
response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The summary part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.added"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text delta was added to."},"delta":{"type":"string","description":"The text delta that was added."},"item_id":{"type":"string","description":"The ID of the output item that the text delta was added to."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text delta was added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","logprobs","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that is done."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the 
event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that is done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was created."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.created"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that was added."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added 
to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.added"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]}]}}}}}}}}}
```

## Code Example

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json"
    },
    json={
        "model": "openai/gpt-5-1-chat-latest",
        "messages": [
            {
                "role": "user",
                "content": "Hello"  # insert your prompt here, instead of Hello
            }
        ]
    }
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  const response = await fetch('https://api.aimlapi.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      // insert your AIML API Key instead of
      'Authorization': 'Bearer ',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'openai/gpt-5-1-chat-latest',
      messages: [
        {
          role: 'user',
          content: 'Hello' // insert your prompt here, instead of Hello
        }
      ],
    }),
  });

  const data = await response.json();
  console.log(JSON.stringify(data, null, 2));
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "chatcmpl-Cbr2xJaeoD76fVtuBcoIIrfVjGsIu",
  "object": "chat.completion",
  "created": 1763138083,
  "model": "gpt-5.1-chat-latest",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?",
        "refusal": null,
        "annotations": []
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 7,
    "completion_tokens": 12,
    "total_tokens": 19,
    "prompt_tokens_details": {
      "cached_tokens": 0,
      "audio_tokens": 0
    },
    "completion_tokens_details": {
      "reasoning_tokens": 0,
      "audio_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    }
  },
  "service_tier": "default",
  "system_fingerprint": null
}
```
{% endcode %}
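Once this JSON is parsed (the `data` variable in the Python snippet above), the assistant's reply text can be read from `choices[0].message.content`. A minimal sketch:

```python
# `data` is the parsed JSON returned by the chat completions request above.
reply = data["choices"][0]["message"]["content"]
print(reply)                                # "Hello! How can I help you today?"
print(data["choices"][0]["finish_reason"])  # "stop"
print(data["usage"]["total_tokens"])        # 19
```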
## Code Example #2: Using /responses Endpoint

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json"
    },
    json={
        "model": "openai/gpt-5-1-chat-latest",
        "input": "Hello"  # Insert your question for the model here, instead of Hello
    }
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  try {
    const response = await fetch('https://api.aimlapi.com/v1/responses', {
      method: 'POST',
      headers: {
        // Insert your AIML API Key instead of
        'Authorization': 'Bearer ',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'openai/gpt-5-1-chat-latest',
        input: 'Hello', // Insert your question here, instead of Hello
      }),
    });

    if (!response.ok) {
      throw new Error(`HTTP error! Status ${response.status}`);
    }

    const data = await response.json();
    console.log(JSON.stringify(data, null, 2));
  } catch (error) {
    console.error('Error', error);
  }
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "resp_0631acadb98188f30069175b4ad34881908b8dc68c29a730bf",
  "object": "response",
  "created_at": 1763138379,
  "error": null,
  "incomplete_details": null,
  "instructions": null,
  "max_output_tokens": 512,
  "model": "gpt-5.1-chat-latest",
  "output": [
    {
      "id": "msg_0631acadb98188f30069175b4bb8f481909d2f56504c2acee3",
      "type": "message",
      "status": "completed",
      "content": [
        {
          "type": "output_text",
          "annotations": [],
          "logprobs": [],
          "text": "Hello. How can I help?"
        }
      ],
      "role": "assistant"
    }
  ],
  "parallel_tool_calls": true,
  "previous_response_id": null,
  "reasoning": {
    "effort": "medium",
    "summary": null
  },
  "temperature": 1,
  "text": {
    "format": {
      "type": "text"
    },
    "verbosity": "medium"
  },
  "tool_choice": "auto",
  "tools": [],
  "top_p": 1,
  "truncation": "disabled",
  "usage": {
    "input_tokens": 18,
    "input_tokens_details": {
      "cached_tokens": 0
    },
    "output_tokens": 231,
    "output_tokens_details": {
      "reasoning_tokens": 0
    },
    "total_tokens": 249
  },
  "metadata": {},
  "output_text": "Hello. How can I help?"
}
```
{% endcode %}
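In the `/responses` format, the generated text is nested under `output[*].content[*].text`; the `output_text` field shown above is a convenience aggregation of those parts. A minimal sketch for extracting the text from the parsed `data` of Code Example #2:

```python
# `data` is the parsed JSON returned by the /v1/responses request above.
# Collect every output_text part across all message items in the output array.
parts = [
    content["text"]
    for item in data.get("output", [])
    if item.get("type") == "message"
    for content in item.get("content", [])
    if content.get("type") == "output_text"
]
print("".join(parts))  # "Hello. How can I help?"
```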
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-1-codex-mini.md

# gpt-5.1-codex-mini

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `openai/gpt-5-1-codex-mini`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}

## Model Overview

A lighter, more affordable variant of [GPT 5.1 Codex](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-1-codex) with reduced capabilities.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example-using-responses-endpoint) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `input` field; this is what the model will respond to.

:digit\_four: **(Optional) Adjust other parameters if needed**

Only `model` and `input` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. A minimal end-to-end sketch of these steps is shown right after these instructions.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
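Putting steps 2–5 together, here is a minimal sketch of a request to this model. It assumes the `/v1/responses` endpoint described below (the only one this model supports); the key placeholder and prompt are illustrative, and the full copy-ready snippets are in the code example at the bottom of this page.

```python
import requests
import json

API_KEY = "<YOUR_AIMLAPI_KEY>"  # step 1: placeholder for the key from your account dashboard

response = requests.post(
    "https://api.aimlapi.com/v1/responses",    # step 2: the endpoint this model uses
    headers={
        "Authorization": f"Bearer {API_KEY}",  # step 3: insert your key
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-5-1-codex-mini",  # required parameter
        "input": "Write a Python one-liner that reverses a string.",  # required parameter
        # step 4 (optional): add extra parameters from the API schema below if needed
    },
)

data = response.json()  # step 5: inspect the response
print(json.dumps(data, indent=2, ensure_ascii=False))
```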
## API Schema
Chat Completions vs. Responses API

**Chat Completions**\
The *chat completions* API is the older, chat-oriented interface where you send a list of messages (`role: user`, `role: assistant`, etc.), and the model returns a single response. It was designed specifically for conversational workflows and follows a structured chat message format. It is now considered a legacy interface.

**Responses**\
The *Responses* API is the newer, unified interface used across OpenAI’s latest models. Instead of focusing only on chat, it supports multiple input types (text, images, audio, tools, etc.) and multiple output modalities (text, JSON, images, audio, video). It is more flexible, more consistent across models, and intended to replace chat completions entirely.
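In practice, the difference shows up mostly in the request body. An illustrative comparison of the two payload shapes (note the warning below: `openai/gpt-5-1-codex-mini` accepts only the `/v1/responses` form):

```python
# Chat Completions (/v1/chat/completions): a list of role-tagged messages.
chat_completions_payload = {
    "model": "openai/gpt-5-1-chat-latest",  # e.g. a model that supports /chat/completions
    "messages": [
        {"role": "user", "content": "Hello"},
    ],
}

# Responses (/v1/responses): a single `input`, which may be plain text
# or a list of typed input items (messages, files, etc.).
responses_payload = {
    "model": "openai/gpt-5-1-codex-mini",
    "input": "Hello",
}
```

Headers and authentication are the same for both; only the endpoint path and the body shape change.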
### Responses Endpoint This endpoint is currently used *only* with OpenAI models. Some models support both the `/chat/completions` and `/responses` endpoints, while others support only one of them. {% hint style="warning" %} Note: This model can ONLY be called via the `/responses` endpoint! {% endhint %} ## POST /v1/responses > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/responses":{"post":{"operationId":"_v1_responses","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-5-1-codex-mini"]},"input":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the user role."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. 
Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"],"description":"An output message from the model."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. 
Always web_search_call."}},"required":["id","status","type"],"description":"The results of a web search tool call."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"],"description":"A tool call to run a function."},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"],"description":"The output of a function tool call."},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"],"description":"A description of the chain of thought used by a reasoning model while generating a response."},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"],"description":"A tool call to run a command on the local shell."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"],"description":"The output of a local shell tool call."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The JSON schema describing the tool's input."},"name":{"type":"string","description":"The name of the tool."},"annotations":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Additional annotations about the tool."},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["input_schema","name"]},"description":"The tools available on the server."},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"],"description":"A list of tools available on an MCP server."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"],"description":"A request for human approval of a tool invocation."},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"],"description":"A response to an MCP approval request."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"],"description":"An invocation of a tool on an MCP server."},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}],"description":"Text, image, or file inputs to the model, used to generate a response."},"background":{"type":"boolean","default":false,"description":"Whether to run the model response in the background."},"instructions":{"type":"string","nullable":true,"description":"A system (or developer) message inserted into the model's context.\n\nWhen using along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses."},"include":{"type":"array","nullable":true,"items":{"type":"string","enum":["message.input_image.image_url","computer_call_output.output.image_url","reasoning.encrypted_content","code_interpreter_call.outputs"]},"description":"Specify additional output data to include in the model response. Currently supported values are:\n- code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.\n- computer_call_output.output.image_url: Include image urls from the computer call output.\n- file_search_call.results: Include the search results of the file search tool call.\n- message.output_text.logprobs: Include logprobs with assistant messages.\n- reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. 
This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).\n"},"max_output_tokens":{"type":"integer","description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]}]},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"store":{"type":"boolean","nullable":true,"default":false,"description":"Whether to store the generated model response for later retrieval via API."},"stream":{"type":"boolean","nullable":true,"default":false,"description":"If set to true, the model response data will be streamed to the client as it is generated using server-sent events. "},"text":{"type":"object","properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. 
Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["format"],"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"truncation":{"type":"string","enum":["auto","disabled"],"default":"disabled","description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"tools":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","description":"A description of the function. Used by the model to determine whether or not to call the function."}},"required":["name","parameters","strict","type"],"description":"Defines a function in your own code the model can choose to call."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. 
California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."}]},"description":"An array of tools the model may call while generating a response. 
You can specify which tool to use by setting the tool_choice parameter."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"]}],"description":"How the model should select which tool (or tools) to use when generating a response."}},"required":["model","input"],"title":"openai/gpt-5-1-codex-mini"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. 
Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"]}},"text/event-stream":{"schema":{"oneOf":[{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.done"],"description":"The type of the 
event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.done"],"description":"The type of the event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The partial code snippet being streamed by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The final code snippet output by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.done"],"description":"The type of the event."}},"required":["code","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter is interpreting code."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.interpreting"],"description":"The type of the 
event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. 
Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"Properties of the completed response."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.completed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."},"param":{"type":"string","description":"The error parameter."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["error"],"description":"The type of the event."}},"required":["code","message","param","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is searching."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The function-call arguments delta that is added."},"item_id":{"type":"string","description":"The ID of the output item that the function-call arguments delta is added to."},"output_index":{"type":"number","description":"The index of the output item that the function-call arguments delta is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"The function-call arguments."},"item_id":{"type":"string","description":"The ID of the item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this 
Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. 
One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.in_progress"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was 
created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.failed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The 
error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was incomplete."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.incomplete"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g."},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was added."},"output_index":{"type":"number","description":"The index of the output item that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.added"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was marked done."},"output_index":{"type":"number","description":"The index of the output item that was marked done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.done"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added to the summary."},"item_id":{"type":"string","description":"The ID of the item this summary text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","summary_index","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary text is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"text":{"type":"string","description":"The full text of the completed reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.done"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","summary_index","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part this delta is associated with."},"delta":{"type":"string","description":"The text delta that was added to the reasoning content."},"item_id":{"type":"string","description":"The ID of the item this reasoning text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.reasoning_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part."},"item_id":{"type":"string","description":"The ID of the item this reasoning text is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The full text of the completed reasoning content."},"type":{"type":"string","enum":["response.reasoning_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","sequence_number","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is added to."},"delta":{"type":"string","description":"The refusal text that is added."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is added to."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is finalized."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is finalized."},"refusal":{"type":"string","description":"The refusal text that is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","refusal","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web 
search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.generating"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"partial_image_b64":{"type":"string","description":"Base64-encoded partial image data, suitable for rendering as an image."},"partial_image_index":{"type":"number","description":"0-based index for the partial image (backend is 1-based, but this is 0-based for the user)."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["response.image_generation_call.partial_image"],"description":"The type of the event."}},"required":["item_id","output_index","partial_image_b64","partial_image_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"A JSON string containing the partial update to the arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string containing the finalized arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that completed."},"output_index":{"type":"number","description":"The index of the output item that completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that produced this output."},"output_index":{"type":"number","description":"The index of the output item that was processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool 
call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that is being processed."},"output_index":{"type":"number","description":"The index of the output item that is being processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"annotation":{"nullable":true,"description":"The annotation object being added."},"annotation_index":{"type":"number","description":"The index of the annotation within the content part."},"content_index":{"type":"number","description":"The index of the content part within the output item."},"item_id":{"type":"string","description":"The unique identifier of the item to which the annotation is being added."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.annotation.added"],"description":"The type of the event."}},"required":["annotation_index","content_index","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The full response object that is queued."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.queued"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The incremental input data (delta) for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this delta applies 
to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"input":{"type":"string","description":"The complete input data for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this event applies to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.done"],"description":"The type of the event."}},"required":["input","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The completed summary part."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.done"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text content is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the text content is finalized."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text content is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The text content that is finalized."},"type":{"type":"string","enum":["response.output_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","logprobs","output_index","sequence_number","text","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the 
response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The summary part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.added"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text delta was added to."},"delta":{"type":"string","description":"The text delta that was added."},"item_id":{"type":"string","description":"The ID of the output item that the text delta was added to."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text delta was added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","logprobs","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that is done."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the 
event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that is done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was created."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.created"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that was added."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added 
to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.added"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]}]}}}}}}}}}
```

## Code Example: Using /responses Endpoint

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-5-1-codex-mini",
        "input": "Hello"  # Insert your question for the model here, instead of Hello
    },
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  try {
    const response = await fetch('https://api.aimlapi.com/v1/responses', {
      method: 'POST',
      headers: {
        // Insert your AIML API Key instead of
        'Authorization': 'Bearer ',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'openai/gpt-5-1-codex-mini',
        input: 'Hello', // Insert your question here, instead of Hello
      }),
    });

    if (!response.ok) {
      throw new Error(`HTTP error! Status ${response.status}`);
    }

    const data = await response.json();
    console.log(JSON.stringify(data, null, 2));
  } catch (error) {
    console.error('Error', error);
  }
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "resp_0461a6b6943b501800691754e69aec819687ee01f3009daca8",
  "object": "response",
  "created_at": 1763136742,
  "error": null,
  "incomplete_details": null,
  "instructions": null,
  "max_output_tokens": 512,
  "model": "gpt-5.1-codex-mini",
  "output": [
    {
      "id": "rs_0461a6b6943b501800691754e71084819680afd9bc4a3da9c2",
      "type": "reasoning",
      "summary": []
    },
    {
      "id": "msg_0461a6b6943b501800691754e736588196b71ffb90155d51a4",
      "type": "message",
      "status": "completed",
      "content": [
        {
          "type": "output_text",
          "annotations": [],
          "logprobs": [],
          "text": "Hello! How can I help you today?"
        }
      ],
      "role": "assistant"
    }
  ],
  "parallel_tool_calls": true,
  "previous_response_id": null,
  "reasoning": {
    "effort": "medium",
    "summary": null
  },
  "temperature": 1,
  "text": {
    "format": {
      "type": "text"
    },
    "verbosity": "medium"
  },
  "tool_choice": "auto",
  "tools": [],
  "top_p": 1,
  "truncation": "disabled",
  "usage": {
    "input_tokens": 4,
    "input_tokens_details": {
      "cached_tokens": 0
    },
    "output_tokens": 63,
    "output_tokens_details": {
      "reasoning_tokens": 0
    },
    "total_tokens": 67
  },
  "metadata": {},
  "output_text": "Hello! How can I help you today?"
}
```
{% endcode %}
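If you only need the model's text rather than the whole payload, you can read it directly from a response shaped like the one above. The helper below is a minimal, hypothetical sketch (not part of the official example): it assumes the field layout shown in this sample, preferring the top-level `output_text` convenience field and otherwise collecting text parts from assistant `message` items in `output`.

{% code overflow="wrap" %}
```python
def extract_text(data: dict) -> str:
    # Prefer the convenience field present in the sample response above.
    if data.get("output_text"):
        return data["output_text"]
    # Otherwise, walk the "output" array and join the text parts of assistant messages.
    parts = []
    for item in data.get("output", []):
        if item.get("type") == "message":
            for part in item.get("content", []):
                if part.get("type") == "output_text":
                    parts.append(part.get("text", ""))
    return "".join(parts)

# Usage with the parsed response from the code example above:
# print(extract_text(data))  # -> "Hello! How can I help you today?"
```
{% endcode %}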
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-1-codex.md

# gpt-5.1-codex

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `openai/gpt-5-1-codex`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}

## Model Overview

A specialized edition of [GPT 5.1](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-1) built for software engineering and coding workflows. It excels in both interactive development sessions and long, autonomous execution of complex engineering tasks. The model can build projects from scratch, develop features, debug, perform large-scale refactoring, and review code.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example-using-responses-endpoint) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Insert your actual AI/ML API key from your account into the `Authorization` header.\
:black\_small\_square: Insert your question or request into the `input` field: this is what the model will respond to.

:digit\_four: **(Optional)** **Adjust other optional parameters if needed**

Only `model` and `input` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. A minimal request sketch follows these steps. Below, you can also find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
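As a quick illustration of steps :digit\_three: and :digit\_four:, here is a minimal request sketch; the full, copy-ready example is at the bottom of this page. Only `model` and `input` are required, `max_output_tokens` is shown as one optional parameter from the schema below, and the `YOUR_AIMLAPI_KEY` placeholder is illustrative; replace it with your real key.

{% code overflow="wrap" %}
```python
import requests

# Minimal sketch of steps 3-4: the required fields plus one optional parameter.
response = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers={
        # "YOUR_AIMLAPI_KEY" is a placeholder; use the key from your account dashboard.
        "Authorization": "Bearer YOUR_AIMLAPI_KEY",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-5-1-codex",  # required
        "input": "Write a unit test for a FizzBuzz function.",  # required: your prompt
        "max_output_tokens": 512,  # optional: cap on visible output and reasoning tokens
    },
)

print(response.json())
```
{% endcode %}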
## API Schema
Chat Completions vs. Responses API

**Chat Completions**\
The *chat completions* API is the older, chat-oriented interface where you send a list of messages (`role: user`, `role: assistant`, etc.), and the model returns a single response. It was designed specifically for conversational workflows and follows a structured chat message format. It is now considered a legacy interface.

**Responses**\
The *Responses* API is the newer, unified interface used across OpenAI’s latest models. Instead of focusing only on chat, it supports multiple input types (text, images, audio, tools, etc.) and multiple output modalities (text, JSON, images, audio, video). It is more flexible, more consistent across models, and intended to replace chat completions entirely.
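To make the difference concrete, here is an illustrative, non-exhaustive sketch of the two request shapes. The chat-completions body uses a hypothetical model ID purely for contrast, since the model documented on this page is available only through `/responses` (see the warning below); both endpoints accept more parameters than shown.

{% code overflow="wrap" %}
```python
# Illustrative request bodies only; both endpoints accept many more parameters.

# Chat Completions (legacy): the conversation is a list of role-tagged messages.
# POST https://api.aimlapi.com/v1/chat/completions
chat_completions_request = {
    "model": "some-chat-model",  # hypothetical ID; gpt-5.1-codex itself is /responses-only
    "messages": [
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Refactor this function to be iterative."},
    ],
}

# Responses (newer, unified): a single `input`, either plain text or structured items.
# POST https://api.aimlapi.com/v1/responses
responses_request = {
    "model": "openai/gpt-5-1-codex",
    "input": "Refactor this function to be iterative.",
}
```
{% endcode %}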
### Responses Endpoint This endpoint is currently used *only* with OpenAI models. Some models support both the `/chat/completions` and `/responses` endpoints, while others support only one of them. {% hint style="warning" %} Note: This model can ONLY be called via the `/responses` endpoint! {% endhint %} ## POST /v1/responses > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/responses":{"post":{"operationId":"_v1_responses","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-5-1-codex"]},"input":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the user role."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. 
Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"],"description":"An output message from the model."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. 
Always web_search_call."}},"required":["id","status","type"],"description":"The results of a web search tool call."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"],"description":"A tool call to run a function."},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"],"description":"The output of a function tool call."},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"],"description":"A description of the chain of thought used by a reasoning model while generating a response."},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"],"description":"A tool call to run a command on the local shell."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"],"description":"The output of a local shell tool call."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The JSON schema describing the tool's input."},"name":{"type":"string","description":"The name of the tool."},"annotations":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Additional annotations about the tool."},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["input_schema","name"]},"description":"The tools available on the server."},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"],"description":"A list of tools available on an MCP server."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"],"description":"A request for human approval of a tool invocation."},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"],"description":"A response to an MCP approval request."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"],"description":"An invocation of a tool on an MCP server."},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}],"description":"Text, image, or file inputs to the model, used to generate a response."},"background":{"type":"boolean","default":false,"description":"Whether to run the model response in the background."},"instructions":{"type":"string","nullable":true,"description":"A system (or developer) message inserted into the model's context.\n\nWhen using along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses."},"include":{"type":"array","nullable":true,"items":{"type":"string","enum":["message.input_image.image_url","computer_call_output.output.image_url","reasoning.encrypted_content","code_interpreter_call.outputs"]},"description":"Specify additional output data to include in the model response. Currently supported values are:\n- code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.\n- computer_call_output.output.image_url: Include image urls from the computer call output.\n- file_search_call.results: Include the search results of the file search tool call.\n- message.output_text.logprobs: Include logprobs with assistant messages.\n- reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. 
This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).\n"},"max_output_tokens":{"type":"integer","description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]}]},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"store":{"type":"boolean","nullable":true,"default":false,"description":"Whether to store the generated model response for later retrieval via API."},"stream":{"type":"boolean","nullable":true,"default":false,"description":"If set to true, the model response data will be streamed to the client as it is generated using server-sent events. "},"text":{"type":"object","properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. 
Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["format"],"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"truncation":{"type":"string","enum":["auto","disabled"],"default":"disabled","description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"tools":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","description":"A description of the function. Used by the model to determine whether or not to call the function."}},"required":["name","parameters","strict","type"],"description":"Defines a function in your own code the model can choose to call."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. 
California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."}]},"description":"An array of tools the model may call while generating a response. 
You can specify which tool to use by setting the tool_choice parameter."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"]}],"description":"How the model should select which tool (or tools) to use when generating a response."}},"required":["model","input"],"title":"openai/gpt-5-1-codex"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. 
Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"]}},"text/event-stream":{"schema":{"oneOf":[{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.done"],"description":"The type of the 
event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.done"],"description":"The type of the event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The partial code snippet being streamed by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The final code snippet output by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.done"],"description":"The type of the event."}},"required":["code","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter is interpreting code."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.interpreting"],"description":"The type of the 
event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. 
Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g."},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"Properties of the completed response."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.completed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."},"param":{"type":"string","description":"The error parameter."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["error"],"description":"The type of the event."}},"required":["code","message","param","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is searching."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The function-call arguments delta that is added."},"item_id":{"type":"string","description":"The ID of the output item that the function-call arguments delta is added to."},"output_index":{"type":"number","description":"The index of the output item that the function-call arguments delta is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"The function-call arguments."},"item_id":{"type":"string","description":"The ID of the item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this 
Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. 
One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.in_progress"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was 
created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.failed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The 
error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was incomplete."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.incomplete"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g. [{ x: 100, y: 200 }, ...]."},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was added."},"output_index":{"type":"number","description":"The index of the output item that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.added"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was marked done."},"output_index":{"type":"number","description":"The index of the output item that was marked done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.done"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added to the summary."},"item_id":{"type":"string","description":"The ID of the item this summary text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","summary_index","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary text is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"text":{"type":"string","description":"The full text of the completed reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.done"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","summary_index","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part this delta is associated with."},"delta":{"type":"string","description":"The text delta that was added to the reasoning content."},"item_id":{"type":"string","description":"The ID of the item this reasoning text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.reasoning_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part."},"item_id":{"type":"string","description":"The ID of the item this reasoning text is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The full text of the completed reasoning content."},"type":{"type":"string","enum":["response.reasoning_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","sequence_number","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is added to."},"delta":{"type":"string","description":"The refusal text that is added."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is added to."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is finalized."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is finalized."},"refusal":{"type":"string","description":"The refusal text that is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","refusal","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web 
search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.generating"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"partial_image_b64":{"type":"string","description":"Base64-encoded partial image data, suitable for rendering as an image."},"partial_image_index":{"type":"number","description":"0-based index for the partial image (backend is 1-based, but this is 0-based for the user)."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["response.image_generation_call.partial_image"],"description":"The type of the event."}},"required":["item_id","output_index","partial_image_b64","partial_image_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"A JSON string containing the partial update to the arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string containing the finalized arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that completed."},"output_index":{"type":"number","description":"The index of the output item that completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that produced this output."},"output_index":{"type":"number","description":"The index of the output item that was processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool 
call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that is being processed."},"output_index":{"type":"number","description":"The index of the output item that is being processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"annotation":{"nullable":true,"description":"The annotation object being added."},"annotation_index":{"type":"number","description":"The index of the annotation within the content part."},"content_index":{"type":"number","description":"The index of the content part within the output item."},"item_id":{"type":"string","description":"The unique identifier of the item to which the annotation is being added."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.annotation.added"],"description":"The type of the event."}},"required":["annotation_index","content_index","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The full response object that is queued."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.queued"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The incremental input data (delta) for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this delta applies 
to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"input":{"type":"string","description":"The complete input data for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this event applies to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.done"],"description":"The type of the event."}},"required":["input","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The completed summary part."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.done"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text content is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the text content is finalized."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text content is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The text content that is finalized."},"type":{"type":"string","enum":["response.output_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","logprobs","output_index","sequence_number","text","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the 
response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The summary part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.added"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text delta was added to."},"delta":{"type":"string","description":"The text delta that was added."},"item_id":{"type":"string","description":"The ID of the output item that the text delta was added to."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text delta was added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","logprobs","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that is done."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the 
event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that is done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was created."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.created"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that was added."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added 
to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.added"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]}]}}}}}}}}}
```

## Code Example: Using /responses Endpoint

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for printing a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers={
        "Content-Type": "application/json",
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
    },
    json={
        "model": "openai/gpt-5-1-codex",
        "input": "Hello"  # Insert your question for the model here, instead of Hello
    },
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  try {
    const response = await fetch('https://api.aimlapi.com/v1/responses', {
      method: 'POST',
      headers: {
        // Insert your AIML API Key instead of 
        'Authorization': 'Bearer ',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'openai/gpt-5-1-codex',
        input: 'Hello', // Insert your question here, instead of Hello
      }),
    });

    if (!response.ok) {
      throw new Error(`HTTP error! Status ${response.status}`);
    }

    const data = await response.json();
    console.log(JSON.stringify(data, null, 2));
  } catch (error) {
    console.error('Error', error);
  }
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "resp_011a79e86d7d08d1006917544d811c81949722761c21c37597",
  "object": "response",
  "created_at": 1763136589,
  "error": null,
  "incomplete_details": null,
  "instructions": null,
  "max_output_tokens": 512,
  "model": "gpt-5.1-codex",
  "output": [
    {
      "id": "rs_011a79e86d7d08d1006917544e35f081949283c10060a9072d",
      "type": "reasoning",
      "summary": []
    },
    {
      "id": "msg_011a79e86d7d08d1006917544e6e148194b6240dda25142f4d",
      "type": "message",
      "status": "completed",
      "content": [
        {
          "type": "output_text",
          "annotations": [],
          "logprobs": [],
          "text": "Hello! How can I help you today?"
        }
      ],
      "role": "assistant"
    }
  ],
  "parallel_tool_calls": true,
  "previous_response_id": null,
  "reasoning": {
    "effort": "medium",
    "summary": null
  },
  "temperature": 1,
  "text": {
    "format": {
      "type": "text"
    },
    "verbosity": "medium"
  },
  "tool_choice": "auto",
  "tools": [],
  "top_p": 1,
  "truncation": "disabled",
  "usage": {
    "input_tokens": 18,
    "input_tokens_details": {
      "cached_tokens": 0
    },
    "output_tokens": 315,
    "output_tokens_details": {
      "reasoning_tokens": 0
    },
    "total_tokens": 333
  },
  "metadata": {},
  "output_text": "Hello! How can I help you today?"
}
```
{% endcode %}
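If you only need the assistant's text from this payload, one approach (a minimal sketch, not part of the official example; field names follow the response shown above) is to walk the `output` array and collect the `output_text` parts:

{% code overflow="wrap" %}
```python
def extract_output_text(data: dict) -> str:
    """Collect the assistant's text from a parsed /v1/responses payload."""
    parts = []
    for item in data.get("output", []):
        if item.get("type") != "message":
            continue  # skip reasoning, tool-call, and other non-message items
        for content in item.get("content", []):
            if content.get("type") == "output_text":
                parts.append(content.get("text", ""))
    return "".join(parts)

# For the response above this returns "Hello! How can I help you today?"
```
{% endcode %}

Where the top-level `output_text` field is present (as in the example above), it already aggregates the same text.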
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-1.md

# gpt-5.1

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `openai/gpt-5-1`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}

## Model Overview

A flagship model for coding and agentic workflows, offering configurable reasoning and non-reasoning modes.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schemas), which lists all available parameters along with notes on how to use them; a minimal request sketch also follows these steps.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
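For orientation, here is a minimal sketch of such a request against the Chat Completions endpoint. It assumes the `requests` library and a placeholder `AIMLAPI_KEY` value that you replace with your own key; the authoritative snippet remains the code example referenced in the steps above.

{% code overflow="wrap" %}
```python
import requests

# Placeholder value: replace with the key from your account dashboard.
AIMLAPI_KEY = "<YOUR_AIMLAPI_KEY>"

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {AIMLAPI_KEY}",
        "Content-Type": "application/json",
    },
    json={
        # Only `model` and `messages` are required for this model.
        "model": "openai/gpt-5-1",
        "messages": [
            {"role": "user", "content": "Hello"},  # your question goes here
        ],
    },
)
response.raise_for_status()

# The generated answer sits in choices[0].message.content (see the schema below).
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}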
## API Schemas
Chat Completions vs. Responses API

**Chat Completions**\
The *Chat Completions* API is the older, chat-oriented interface where you send a list of messages (`role: user`, `role: assistant`, etc.), and the model returns a single response. It was designed specifically for conversational workflows and follows a structured chat message format. It is now considered a legacy interface.

**Responses**\
The *Responses* API is the newer, unified interface used across OpenAI’s latest models. Instead of focusing only on chat, it supports multiple input types (text, images, audio, tools, etc.) and multiple output modalities (text, JSON, images, audio, video). It is more flexible, more consistent across models, and intended to replace the Chat Completions API entirely.
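To make the difference concrete, here is an illustrative sketch of the two request bodies for the same one-turn prompt. The payload shapes follow the schemas below; this is not an exhaustive list of fields.

{% code overflow="wrap" %}
```python
# POST /v1/chat/completions — a role-tagged message list:
chat_completions_body = {
    "model": "openai/gpt-5-1",
    "messages": [{"role": "user", "content": "Hello"}],
}

# POST /v1/responses — a single `input` (a plain string here;
# structured message items are also allowed):
responses_body = {
    "model": "openai/gpt-5-1",
    "input": "Hello",
}
```
{% endcode %}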
### Chat Completions Endpoint ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-5-1"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"reasoning_effort":{"type":"string","enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"openai/gpt-5-1"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ### Responses Endpoint This endpoint is currently used *only* with OpenAI models. Some models support both the `/chat/completions` and `/responses` endpoints, while others support only one of them. 
## POST /v1/responses > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/responses":{"post":{"operationId":"_v1_responses","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-5-1"]},"input":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the user role."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. 
Instructions given with the developer or system role take precedence over instructions given with the user role."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"],"description":"An output message from the model."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"],"description":"The results of a web search tool call."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"],"description":"A tool call to run a function."},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"],"description":"The output of a function tool call."},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"],"description":"A description of the chain of thought used by a reasoning model while generating a response."},{"type":"object","properties":{"code":{"type":"string","description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","interpreting"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["code","id","outputs","status","type","container_id"],"description":"A tool call to run code."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The JSON schema describing the tool's input."},"name":{"type":"string","description":"The name of the tool."},"annotations":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Additional annotations about the tool."},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["input_schema","name"]},"description":"The tools available on the server."},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"],"description":"A list of tools available on an MCP server."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"],"description":"A request for human approval of a tool invocation."},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"],"description":"A response to an MCP approval request."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"],"description":"An invocation of a tool on an MCP server."},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}],"description":"Text, image, or file inputs to the model, used to generate a response."},"background":{"type":"boolean","default":false,"description":"Whether to run the model response in the background."},"instructions":{"type":"string","nullable":true,"description":"A system (or developer) message inserted into the model's context.\n\nWhen using along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses."},"include":{"type":"array","nullable":true,"items":{"type":"string","enum":["message.input_image.image_url","computer_call_output.output.image_url","reasoning.encrypted_content","code_interpreter_call.outputs"]},"description":"Specify additional output data to include in the model response. Currently supported values are:\n- code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.\n- computer_call_output.output.image_url: Include image urls from the computer call output.\n- file_search_call.results: Include the search results of the file search tool call.\n- message.output_text.logprobs: Include logprobs with assistant messages.\n- reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).\n"},"max_output_tokens":{"type":"integer","description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]}]},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"store":{"type":"boolean","nullable":true,"default":false,"description":"Whether to store the generated model response for later retrieval via API."},"stream":{"type":"boolean","nullable":true,"default":false,"description":"If set to true, the model response data will be streamed to the client as it is generated using server-sent events. "},"text":{"type":"object","properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["format"],"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"truncation":{"type":"string","enum":["auto","disabled"],"default":"disabled","description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"tools":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","description":"A description of the function. Used by the model to determine whether or not to call the function."}},"required":["name","parameters","strict","type"],"description":"Defines a function in your own code the model can choose to call."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. 
Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"]}],"description":"How the model should select which tool (or tools) to use when generating a response."}},"required":["model","input"],"title":"openai/gpt-5-1"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a 
text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. 
Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"]}},"text/event-stream":{"schema":{"oneOf":[{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.done"],"description":"The type of the 
event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.done"],"description":"The type of the event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The partial code snippet being streamed by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The final code snippet output by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.done"],"description":"The type of the event."}},"required":["code","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter is interpreting code."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.interpreting"],"description":"The type of the 
event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. 
Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"Properties of the completed response."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.completed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."},"param":{"type":"string","description":"The error parameter."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["error"],"description":"The type of the event."}},"required":["code","message","param","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is searching."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The function-call arguments delta that is added."},"item_id":{"type":"string","description":"The ID of the output item that the function-call arguments delta is added to."},"output_index":{"type":"number","description":"The index of the output item that the function-call arguments delta is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"The function-call arguments."},"item_id":{"type":"string","description":"The ID of the item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this 
Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. 
One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.in_progress"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was 
created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.failed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The 
error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was incomplete."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.incomplete"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was added."},"output_index":{"type":"number","description":"The index of the output item that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.added"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was marked done."},"output_index":{"type":"number","description":"The index of the output item that was marked done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.done"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added to the summary."},"item_id":{"type":"string","description":"The ID of the item this summary text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","summary_index","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary text is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"text":{"type":"string","description":"The full text of the completed reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.done"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","summary_index","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part this delta is associated with."},"delta":{"type":"string","description":"The text delta that was added to the reasoning content."},"item_id":{"type":"string","description":"The ID of the item this reasoning text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.reasoning_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part."},"item_id":{"type":"string","description":"The ID of the item this reasoning text is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The full text of the completed reasoning content."},"type":{"type":"string","enum":["response.reasoning_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","sequence_number","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is added to."},"delta":{"type":"string","description":"The refusal text that is added."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is added to."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is finalized."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is finalized."},"refusal":{"type":"string","description":"The refusal text that is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","refusal","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web 
search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.generating"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"partial_image_b64":{"type":"string","description":"Base64-encoded partial image data, suitable for rendering as an image."},"partial_image_index":{"type":"number","description":"0-based index for the partial image (backend is 1-based, but this is 0-based for the user)."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["response.image_generation_call.partial_image"],"description":"The type of the event."}},"required":["item_id","output_index","partial_image_b64","partial_image_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"A JSON string containing the partial update to the arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string containing the finalized arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that completed."},"output_index":{"type":"number","description":"The index of the output item that completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that produced this output."},"output_index":{"type":"number","description":"The index of the output item that was processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool 
call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that is being processed."},"output_index":{"type":"number","description":"The index of the output item that is being processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"annotation":{"nullable":true,"description":"The annotation object being added."},"annotation_index":{"type":"number","description":"The index of the annotation within the content part."},"content_index":{"type":"number","description":"The index of the content part within the output item."},"item_id":{"type":"string","description":"The unique identifier of the item to which the annotation is being added."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.annotation.added"],"description":"The type of the event."}},"required":["annotation_index","content_index","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The full response object that is queued."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.queued"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The incremental input data (delta) for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this delta applies 
to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"input":{"type":"string","description":"The complete input data for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this event applies to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.done"],"description":"The type of the event."}},"required":["input","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The completed summary part."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.done"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text content is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the text content is finalized."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text content is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The text content that is finalized."},"type":{"type":"string","enum":["response.output_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","logprobs","output_index","sequence_number","text","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the 
response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The summary part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.added"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text delta was added to."},"delta":{"type":"string","description":"The text delta that was added."},"item_id":{"type":"string","description":"The ID of the output item that the text delta was added to."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text delta was added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","logprobs","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that is done."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the 
event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that is done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was created."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.created"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that was added."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added 
to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.added"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]}]}}}}}}}}}
```

## Code Example

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-5-1",
        "messages": [
            {
                "role": "user",
                "content": "Hello"  # insert your prompt here, instead of Hello
            }
        ]
    }
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  const response = await fetch('https://api.aimlapi.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      // insert your AIML API Key instead of
      'Authorization': 'Bearer ',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'openai/gpt-5-1',
      messages: [
        {
          role: 'user',
          content: 'Hello' // insert your prompt here, instead of Hello
        }
      ],
    }),
  });

  const data = await response.json();
  console.log(JSON.stringify(data, null, 2));
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "chatcmpl-C2CISXQ7zuF4Hl0bYT7wZeTFaxZnx",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "message": {
        "role": "assistant",
        "content": "Hi! How can I help you today?",
        "refusal": null,
        "annotations": []
      }
    }
  ],
  "created": 1754639960,
  "model": "gpt-5-2025-08-07",
  "usage": {
    "prompt_tokens": 18,
    "completion_tokens": 1722,
    "total_tokens": 1740,
    "prompt_tokens_details": {
      "cached_tokens": 0,
      "audio_tokens": 0
    },
    "completion_tokens_details": {
      "reasoning_tokens": 64,
      "audio_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    }
  },
  "system_fingerprint": null
}
```
{% endcode %}
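If you only need the generated text rather than the full JSON, you can read it from `choices[0].message.content` — a minimal sketch, reusing the `data` variable from the Python example above:

{% code overflow="wrap" %}
```python
# Minimal sketch: pull the assistant's reply out of the Chat Completions response above.
reply = data["choices"][0]["message"]["content"]
print(reply)  # -> "Hi! How can I help you today?"
```
{% endcode %}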
## Code Example #2: Using /responses Endpoint

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-5-1",
        "input": "Hello"  # Insert your question for the model here, instead of Hello
    }
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  try {
    const response = await fetch('https://api.aimlapi.com/v1/responses', {
      method: 'POST',
      headers: {
        // Insert your AIML API Key instead of
        'Authorization': 'Bearer ',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'openai/gpt-5-1',
        input: 'Hello', // Insert your question here, instead of Hello
      }),
    });

    if (!response.ok) {
      throw new Error(`HTTP error! Status ${response.status}`);
    }

    const data = await response.json();
    console.log(JSON.stringify(data, null, 2));
  } catch (error) {
    console.error('Error:', error);
  }
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "resp_0f74268075d133690069175ad599308193b3e8e9de3f200897",
  "object": "response",
  "created_at": 1763138261,
  "error": null,
  "incomplete_details": null,
  "instructions": null,
  "max_output_tokens": 512,
  "model": "gpt-5.1-2025-11-13",
  "output": [
    {
      "id": "msg_0f74268075d133690069175ad66030819383270663fc98a5d1",
      "type": "message",
      "status": "completed",
      "content": [
        {
          "type": "output_text",
          "annotations": [],
          "logprobs": [],
          "text": "Hello! How can I help you today?"
        }
      ],
      "role": "assistant"
    }
  ],
  "parallel_tool_calls": true,
  "previous_response_id": null,
  "reasoning": {
    "effort": "none",
    "summary": null
  },
  "temperature": 1,
  "text": {
    "format": {
      "type": "text"
    },
    "verbosity": "medium"
  },
  "tool_choice": "auto",
  "tools": [],
  "top_p": 1,
  "truncation": "disabled",
  "usage": {
    "input_tokens": 18,
    "input_tokens_details": {
      "cached_tokens": 0
    },
    "output_tokens": 399,
    "output_tokens_details": {
      "reasoning_tokens": 0
    },
    "total_tokens": 417
  },
  "metadata": {},
  "output_text": "Hello! How can I help you today?"
}
```
{% endcode %}
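The `/responses` payload has a different shape: the generated text lives in the `output` array, and this example also includes a top-level `output_text` convenience field. A minimal sketch of reading the reply, reusing the `data` variable from the Python example above (the fallback loop is an assumption for cases where `output_text` is absent):

{% code overflow="wrap" %}
```python
# Minimal sketch: read the reply from the /responses payload above.
reply = data.get("output_text")
if reply is None:
    # Fall back to walking the output array of message items.
    reply = next(
        part["text"]
        for item in data["output"] if item["type"] == "message"
        for part in item["content"] if part["type"] == "output_text"
    )
print(reply)  # -> "Hello! How can I help you today?"
```
{% endcode %}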
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-chat.md

# gpt-5-chat

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `openai/gpt-5-chat-latest`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}

## Model Overview

The non-reasoning version of [GPT‑5](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5).

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment; a minimal sketch is also shown right after these instructions.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field; this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schemas), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
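As a quick illustration of steps 2–5, here is a minimal sketch of such a request in Python; the API key placeholder is left empty, as in the other examples on this site, so insert your own key and prompt:

{% code overflow="wrap" %}
```python
import requests

# Minimal sketch of a Chat Completions request for the model covered on this page.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-5-chat-latest",
        "messages": [
            {"role": "user", "content": "Hello"},  # insert your prompt instead of Hello
        ],
    },
)

# Print only the generated reply text.
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}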
## API Schemas
Chat Completions vs. Responses API

**Chat Completions**\
The *chat completions* API is the older, chat-oriented interface where you send a list of messages (`role: user`, `role: assistant`, etc.), and the model returns a single response. It was designed specifically for conversational workflows and follows a structured chat message format. It is now considered a legacy interface.

**Responses**\
The *Responses* API is the newer, unified interface used across OpenAI’s latest models. Instead of focusing only on chat, it supports multiple input types (text, images, audio, tools, etc.) and multiple output modalities (text, JSON, images, audio, video). It is more flexible, more consistent across models, and intended to replace chat completions entirely.
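To make the difference concrete, here is a minimal sketch of the same "Hello" prompt expressed as a payload for each interface, using this page's `openai/gpt-5-chat-latest` model ID (request bodies only; authentication headers are as in the code examples above):

{% code overflow="wrap" %}
```python
# Chat Completions: a list of role-tagged messages.
chat_completions_payload = {
    "model": "openai/gpt-5-chat-latest",
    "messages": [
        {"role": "user", "content": "Hello"},
    ],
}
# POST to https://api.aimlapi.com/v1/chat/completions
# The reply comes back in choices[0].message.content.

# Responses: a single "input" field. A plain string is shown here;
# per the schema below it may also be a list of typed input items.
responses_payload = {
    "model": "openai/gpt-5-chat-latest",
    "input": "Hello",
}
# POST to https://api.aimlapi.com/v1/responses
# The reply comes back in the "output" array (or via output_text in SDKs).
```
{% endcode %}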
### Chat Completions Endpoint ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-5-chat-latest"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. 
So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. 
Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"openai/gpt-5-chat-latest"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ### Responses Endpoint This endpoint is currently used *only* with OpenAI models. Some models support both the `/chat/completions` and `/responses` endpoints, while others support only one of them. 
## POST /v1/responses > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/responses":{"post":{"operationId":"_v1_responses","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-5-chat-latest"]},"input":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the user role."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. 
Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"],"description":"An output message from the model."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The JSON schema describing the tool's input."},"name":{"type":"string","description":"The name of the tool."},"annotations":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Additional annotations about the tool."},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["input_schema","name"]},"description":"The tools available on the server."},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"],"description":"A list of tools available on an MCP server."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"],"description":"A request for human approval of a tool invocation."},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"],"description":"A response to an MCP approval request."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"],"description":"An invocation of a tool on an MCP server."},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}],"description":"Text, image, or file inputs to the model, used to generate a response."},"background":{"type":"boolean","default":false,"description":"Whether to run the model response in the background."},"instructions":{"type":"string","nullable":true,"description":"A system (or developer) message inserted into the model's context.\n\nWhen using along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses."},"include":{"type":"array","nullable":true,"items":{"type":"string","enum":["message.input_image.image_url","computer_call_output.output.image_url","reasoning.encrypted_content","code_interpreter_call.outputs"]},"description":"Specify additional output data to include in the model response. Currently supported values are:\n- code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.\n- computer_call_output.output.image_url: Include image urls from the computer call output.\n- file_search_call.results: Include the search results of the file search tool call.\n- message.output_text.logprobs: Include logprobs with assistant messages.\n- reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).\n"},"max_output_tokens":{"type":"integer","description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]}]},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"store":{"type":"boolean","nullable":true,"default":false,"description":"Whether to store the generated model response for later retrieval via API."},"stream":{"type":"boolean","nullable":true,"default":false,"description":"If set to true, the model response data will be streamed to the client as it is generated using server-sent events. "},"text":{"type":"object","properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["format"],"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"truncation":{"type":"string","enum":["auto","disabled"],"default":"disabled","description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"tools":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"]}],"description":"How the model should select which tool (or tools) to use when generating a response."}},"required":["model","input"],"title":"openai/gpt-5-chat-latest"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the 
response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. 
A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"]}},"text/event-stream":{"schema":{"oneOf":[{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.done"],"description":"The type of the 
event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.done"],"description":"The type of the event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The partial code snippet being streamed by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The final code snippet output by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.done"],"description":"The type of the event."}},"required":["code","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter is interpreting code."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.interpreting"],"description":"The type of the 
event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. 
Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"Properties of the completed response."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.completed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."},"param":{"type":"string","description":"The error parameter."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["error"],"description":"The type of the event."}},"required":["code","message","param","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is searching."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The function-call arguments delta that is added."},"item_id":{"type":"string","description":"The ID of the output item that the function-call arguments delta is added to."},"output_index":{"type":"number","description":"The index of the output item that the function-call arguments delta is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"The function-call arguments."},"item_id":{"type":"string","description":"The ID of the item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this 
Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. 
One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.in_progress"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was 
created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.failed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The 
error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was incomplete."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.incomplete"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string","description":"The name of the tool to run."},"server_label":{"type":"string","description":"The label of the MCP server."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was added."},"output_index":{"type":"number","description":"The index of the output item that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.added"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was marked done."},"output_index":{"type":"number","description":"The index of the output item that was marked done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.done"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added to the summary."},"item_id":{"type":"string","description":"The ID of the item this summary text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","summary_index","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary text is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"text":{"type":"string","description":"The full text of the completed reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.done"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","summary_index","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part this delta is associated with."},"delta":{"type":"string","description":"The text delta that was added to the reasoning content."},"item_id":{"type":"string","description":"The ID of the item this reasoning text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.reasoning_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part."},"item_id":{"type":"string","description":"The ID of the item this reasoning text is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The full text of the completed reasoning content."},"type":{"type":"string","enum":["response.reasoning_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","sequence_number","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is added to."},"delta":{"type":"string","description":"The refusal text that is added."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is added to."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is finalized."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is finalized."},"refusal":{"type":"string","description":"The refusal text that is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","refusal","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web 
search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.generating"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"partial_image_b64":{"type":"string","description":"Base64-encoded partial image data, suitable for rendering as an image."},"partial_image_index":{"type":"number","description":"0-based index for the partial image (backend is 1-based, but this is 0-based for the user)."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["response.image_generation_call.partial_image"],"description":"The type of the event."}},"required":["item_id","output_index","partial_image_b64","partial_image_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"A JSON string containing the partial update to the arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string containing the finalized arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that completed."},"output_index":{"type":"number","description":"The index of the output item that completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that produced this output."},"output_index":{"type":"number","description":"The index of the output item that was processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool 
call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that is being processed."},"output_index":{"type":"number","description":"The index of the output item that is being processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"annotation":{"nullable":true,"description":"The annotation object being added."},"annotation_index":{"type":"number","description":"The index of the annotation within the content part."},"content_index":{"type":"number","description":"The index of the content part within the output item."},"item_id":{"type":"string","description":"The unique identifier of the item to which the annotation is being added."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.annotation.added"],"description":"The type of the event."}},"required":["annotation_index","content_index","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string","description":"The name of the tool to run."},"server_label":{"type":"string","description":"The label of the MCP server making the request."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string","description":"The name of the tool to run."},"server_label":{"type":"string","description":"The label of the MCP server making the request."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The full response object that is queued."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.queued"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The incremental input data (delta) for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this delta applies 
to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"input":{"type":"string","description":"The complete input data for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this event applies to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.done"],"description":"The type of the event."}},"required":["input","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The completed summary part."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.done"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text content is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the text content is finalized."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text content is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The text content that is finalized."},"type":{"type":"string","enum":["response.output_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","logprobs","output_index","sequence_number","text","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the 
response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The summary part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.added"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text delta was added to."},"delta":{"type":"string","description":"The text delta that was added."},"item_id":{"type":"string","description":"The ID of the output item that the text delta was added to."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text delta was added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","logprobs","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that is done."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the 
event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that is done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was created."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.created"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that was added."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added 
to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.added"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]}]}}}}}}}}}
```

## Code Example

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-5-chat-latest",
        "messages": [
            {
                "role": "user",
                "content": "Hello"  # insert your prompt here, instead of Hello
            }
        ]
    }
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  const response = await fetch('https://api.aimlapi.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      // Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
      'Authorization': 'Bearer <YOUR_AIMLAPI_KEY>',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'openai/gpt-5-chat-latest',
      messages: [
        {
          role: 'user',
          content: 'Hello' // insert your prompt here, instead of Hello
        }
      ],
    }),
  });

  const data = await response.json();
  console.log(JSON.stringify(data, null, 2));
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "chatcmpl-C2LzHL8ho70oHYKMGSWr6wanLutvD",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hi there! 👋 How’s your day going?",
        "refusal": null,
        "annotations": []
      }
    }
  ],
  "created": 1754677211,
  "model": "gpt-5-chat-latest",
  "usage": {
    "prompt_tokens": 21,
    "completion_tokens": 231,
    "total_tokens": 252,
    "prompt_tokens_details": {
      "cached_tokens": 0,
      "audio_tokens": 0
    },
    "completion_tokens_details": {
      "reasoning_tokens": 0,
      "audio_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    }
  },
  "system_fingerprint": "fp_8e31f7e21a"
}
```
{% endcode %}
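If you only need the generated text rather than the whole response object, you can read it from the first element of `choices`. A minimal sketch, assuming `data` holds the parsed JSON from the Python example above:

```python
# `data` is the dictionary returned by response.json() in the example above.
# The assistant's reply sits in the first (and here, only) choice.
reply = data["choices"][0]["message"]["content"]
print(reply)  # -> "Hi there! 👋 How’s your day going?"
```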
## Code Example #2: Using /responses Endpoint

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-5-chat-latest",
        "input": "Hello"  # Insert your question for the model here, instead of Hello
    }
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  try {
    const response = await fetch('https://api.aimlapi.com/v1/responses', {
      method: 'POST',
      headers: {
        // Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
        'Authorization': 'Bearer <YOUR_AIMLAPI_KEY>',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'openai/gpt-5-chat-latest',
        input: 'Hello', // Insert your question here, instead of Hello
      }),
    });

    if (!response.ok) {
      throw new Error(`HTTP error! Status ${response.status}`);
    }

    const data = await response.json();
    console.log(JSON.stringify(data, null, 2));
  } catch (error) {
    console.error('Error', error);
  }
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "resp_68963fb142d08197b4d3ae3ad852542c054845c6ea84caa2",
  "object": "response",
  "created_at": 1754677169,
  "error": null,
  "incomplete_details": null,
  "instructions": null,
  "max_output_tokens": 512,
  "model": "gpt-5-chat-latest",
  "output": [
    {
      "id": "msg_68963fb1c5b88197b1ac96592463ffa7054845c6ea84caa2",
      "type": "message",
      "status": "completed",
      "content": [
        {
          "type": "output_text",
          "annotations": [],
          "logprobs": [],
          "text": "Hi! How’s your day going?"
        }
      ],
      "role": "assistant"
    }
  ],
  "parallel_tool_calls": true,
  "previous_response_id": null,
  "reasoning": {
    "effort": null,
    "summary": null
  },
  "temperature": 1,
  "text": {
    "format": {
      "type": "text"
    },
    "verbosity": "medium"
  },
  "tool_choice": "auto",
  "tools": [],
  "top_p": 1,
  "truncation": "disabled",
  "usage": {
    "input_tokens": 21,
    "input_tokens_details": {
      "cached_tokens": 0
    },
    "output_tokens": 189,
    "output_tokens_details": {
      "reasoning_tokens": 0
    },
    "total_tokens": 210
  },
  "metadata": {},
  "output_text": "Hi! How’s your day going?"
}
```
{% endcode %}
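With the Responses API, the generated text is also exposed directly, but it can be safer to walk the `output` array, since its items may include entries other than assistant messages (tool calls, reasoning items, and so on, as listed in the schema above). A minimal sketch, assuming `data` holds the parsed JSON from the Python example above:

```python
# `data` is the dictionary returned by response.json() in the /responses example above.
# Collect every text part from assistant message items rather than assuming output[0].
texts = [
    part["text"]
    for item in data.get("output", [])
    if item.get("type") == "message"
    for part in item.get("content", [])
    if part.get("type") == "output_text"
]
print("\n".join(texts))  # -> "Hi! How’s your day going?"
```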
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-mini.md

# gpt-5-mini

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `openai/gpt-5-mini-2025-08-07`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}

## Model Overview

A quicker, more budget-friendly variant of [GPT-5](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5), ideal for clear tasks and precise prompts.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `<YOUR_AIMLAPI_KEY>` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schemas), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. A minimal request sketch is shown right after these steps.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
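To make the steps concrete, here is a minimal request sketch in Python. The `<YOUR_AIMLAPI_KEY>` placeholder and the exact prompt are assumptions; the full snippets are in the [code example](#code-example) below.

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-5-mini-2025-08-07",
        "messages": [
            {"role": "user", "content": "Hello"}  # insert your prompt here
        ],
    },
)

# Print only the generated reply
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}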
## API Schemas
Chat Completions vs. Responses API

**Chat Completions**\
The *chat completions* API is the older, chat-oriented interface where you send a list of messages (`role: user`, `role: assistant`, etc.), and the model returns a single response. It was designed specifically for conversational workflows and follows a structured chat message format. It is now considered a legacy interface.

**Responses**\
The *Responses* API is the newer, unified interface used across OpenAI’s latest models. Instead of focusing only on chat, it supports multiple input types (text, images, audio, tools, etc.) and multiple output modalities (text, JSON, images, audio, video). It is more flexible, more consistent across models, and intended to replace chat completions entirely.
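For comparison with the Chat Completions sketch above, the same request through the `/v1/responses` endpoint only changes the body shape: a single `input` field replaces the `messages` array. A minimal sketch under the same placeholder-key assumption:

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",  # your AI/ML API key
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-5-mini-2025-08-07",
        "input": "Hello",  # a plain string instead of a messages array
    },
)

# The Responses API returns the generated text as `output_text`
print(response.json()["output_text"])
```
{% endcode %}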
### Chat Completions Endpoint ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-5-mini-2025-08-07"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"reasoning_effort":{"type":"string","enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"openai/gpt-5-mini-2025-08-07"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ### Responses Endpoint This endpoint is currently used *only* with OpenAI models. Some models support both the `/chat/completions` and `/responses` endpoints, while others support only one of them. ## POST /v1/responses > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/responses":{"post":{"operationId":"_v1_responses","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-5-mini-2025-08-07"]},"input":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the user role."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. 
Instructions given with the developer or system role take precedence over instructions given with the user role."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"],"description":"An output message from the model."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"],"description":"The results of a web search tool call."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"],"description":"A tool call to run a function."},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"],"description":"The output of a function tool call."},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"],"description":"A description of the chain of thought used by a reasoning model while generating a response."},{"type":"object","properties":{"code":{"type":"string","description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","interpreting"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["code","id","outputs","status","type","container_id"],"description":"A tool call to run code."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The JSON schema describing the tool's input."},"name":{"type":"string","description":"The name of the tool."},"annotations":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Additional annotations about the tool."},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["input_schema","name"]},"description":"The tools available on the server."},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"],"description":"A list of tools available on an MCP server."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"],"description":"A request for human approval of a tool invocation."},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"],"description":"A response to an MCP approval request."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"],"description":"An invocation of a tool on an MCP server."},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}],"description":"Text, image, or file inputs to the model, used to generate a response."},"background":{"type":"boolean","default":false,"description":"Whether to run the model response in the background."},"instructions":{"type":"string","nullable":true,"description":"A system (or developer) message inserted into the model's context.\n\nWhen using along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses."},"include":{"type":"array","nullable":true,"items":{"type":"string","enum":["message.input_image.image_url","computer_call_output.output.image_url","reasoning.encrypted_content","code_interpreter_call.outputs"]},"description":"Specify additional output data to include in the model response. Currently supported values are:\n- code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.\n- computer_call_output.output.image_url: Include image urls from the computer call output.\n- file_search_call.results: Include the search results of the file search tool call.\n- message.output_text.logprobs: Include logprobs with assistant messages.\n- reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).\n"},"max_output_tokens":{"type":"integer","description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]}]},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"store":{"type":"boolean","nullable":true,"default":false,"description":"Whether to store the generated model response for later retrieval via API."},"stream":{"type":"boolean","nullable":true,"default":false,"description":"If set to true, the model response data will be streamed to the client as it is generated using server-sent events. "},"text":{"type":"object","properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["format"],"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"truncation":{"type":"string","enum":["auto","disabled"],"default":"disabled","description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"tools":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","description":"A description of the function. Used by the model to determine whether or not to call the function."}},"required":["name","parameters","strict","type"],"description":"Defines a function in your own code the model can choose to call."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. 
Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"]}],"description":"How the model should select which tool (or tools) to use when generating a response."}},"required":["model","input"],"title":"openai/gpt-5-mini-2025-08-07"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, 
equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. 
Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string","description":"The name of the tool to run."},"server_label":{"type":"string","description":"The label of the MCP server."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string","description":"The name of the tool to run."},"server_label":{"type":"string","description":"The label of the MCP server."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"]}},"text/event-stream":{"schema":{"oneOf":[{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.done"],"description":"The type of the 
event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.done"],"description":"The type of the event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The partial code snippet being streamed by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The final code snippet output by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.done"],"description":"The type of the event."}},"required":["code","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter is interpreting code."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.interpreting"],"description":"The type of the 
event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. 
Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"Properties of the completed response."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.completed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."},"param":{"type":"string","description":"The error parameter."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["error"],"description":"The type of the event."}},"required":["code","message","param","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is searching."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The function-call arguments delta that is added."},"item_id":{"type":"string","description":"The ID of the output item that the function-call arguments delta is added to."},"output_index":{"type":"number","description":"The index of the output item that the function-call arguments delta is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"The function-call arguments."},"item_id":{"type":"string","description":"The ID of the item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this 
Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. 
One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.in_progress"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was 
created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.failed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The 
error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was incomplete."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.incomplete"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was added."},"output_index":{"type":"number","description":"The index of the output item that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.added"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was marked done."},"output_index":{"type":"number","description":"The index of the output item that was marked done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.done"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added to the summary."},"item_id":{"type":"string","description":"The ID of the item this summary text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","summary_index","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary text is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"text":{"type":"string","description":"The full text of the completed reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.done"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","summary_index","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part this delta is associated with."},"delta":{"type":"string","description":"The text delta that was added to the reasoning content."},"item_id":{"type":"string","description":"The ID of the item this reasoning text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.reasoning_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part."},"item_id":{"type":"string","description":"The ID of the item this reasoning text is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The full text of the completed reasoning content."},"type":{"type":"string","enum":["response.reasoning_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","sequence_number","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is added to."},"delta":{"type":"string","description":"The refusal text that is added."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is added to."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is finalized."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is finalized."},"refusal":{"type":"string","description":"The refusal text that is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","refusal","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web 
search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.generating"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"partial_image_b64":{"type":"string","description":"Base64-encoded partial image data, suitable for rendering as an image."},"partial_image_index":{"type":"number","description":"0-based index for the partial image (backend is 1-based, but this is 0-based for the user)."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["response.image_generation_call.partial_image"],"description":"The type of the event."}},"required":["item_id","output_index","partial_image_b64","partial_image_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"A JSON string containing the partial update to the arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string containing the finalized arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that completed."},"output_index":{"type":"number","description":"The index of the output item that completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that produced this output."},"output_index":{"type":"number","description":"The index of the output item that was processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool 
call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that is being processed."},"output_index":{"type":"number","description":"The index of the output item that is being processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"annotation":{"nullable":true,"description":"The annotation object being added."},"annotation_index":{"type":"number","description":"The index of the annotation within the content part."},"content_index":{"type":"number","description":"The index of the content part within the output item."},"item_id":{"type":"string","description":"The unique identifier of the item to which the annotation is being added."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.annotation.added"],"description":"The type of the event."}},"required":["annotation_index","content_index","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The full response object that is queued."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.queued"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The incremental input data (delta) for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this delta applies 
to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"input":{"type":"string","description":"The complete input data for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this event applies to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.done"],"description":"The type of the event."}},"required":["input","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The completed summary part."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.done"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text content is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the text content is finalized."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text content is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The text content that is finalized."},"type":{"type":"string","enum":["response.output_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","logprobs","output_index","sequence_number","text","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the 
response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The summary part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.added"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text delta was added to."},"delta":{"type":"string","description":"The text delta that was added."},"item_id":{"type":"string","description":"The ID of the output item that the text delta was added to."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text delta was added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","logprobs","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that is done."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the 
event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that is done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was created."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.created"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that was added."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added 
to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.added"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]}]}}}}}}}}} ```

## Code Example

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-5-mini-2025-08-07",
        "messages": [
            {
                "role": "user",
                "content": "Hello"  # insert your prompt here, instead of Hello
            }
        ]
    }
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  const response = await fetch('https://api.aimlapi.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      // insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
      'Authorization': 'Bearer <YOUR_AIMLAPI_KEY>',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'openai/gpt-5-mini-2025-08-07',
      messages: [
        {
          role: 'user',
          content: 'Hello', // insert your prompt here, instead of Hello
        },
      ],
    }),
  });

  const data = await response.json();
  console.log(JSON.stringify(data, null, 2));
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "chatcmpl-C2Ci53r5nVlzpplpprLjgK0qyMbxm", "object": "chat.completion", "choices": [ { "index": 0, "finish_reason": "stop", "message": { "role": "assistant", "content": "Hi — how can I help you today? \n\nHere are a few things I can do if you want ideas:\n- Answer questions or explain something\n- Help draft or edit text (emails, essays, resumes)\n- Write or debug code\n- Brainstorm ideas or plan projects\n- Summarize articles or long documents\n- Translate or practice another language\n\nWhat do you need?", "refusal": null, "annotations": [] } } ], "created": 1754641549, "model": "gpt-5-mini-2025-08-07", "usage": { "prompt_tokens": 4, "completion_tokens": 903, "total_tokens": 907, "prompt_tokens_details": { "cached_tokens": 0, "audio_tokens": 0 }, "completion_tokens_details": { "reasoning_tokens": 128, "audio_tokens": 0, "accepted_prediction_tokens": 0, "rejected_prediction_tokens": 0 } }, "system_fingerprint": null } ``` {% endcode %}
## Code Example #2: Using /responses Endpoint

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-5-mini-2025-08-07",
        "input": "Hello"  # Insert your question for the model here, instead of Hello
    }
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  try {
    const response = await fetch('https://api.aimlapi.com/v1/responses', {
      method: 'POST',
      headers: {
        // Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
        'Authorization': 'Bearer <YOUR_AIMLAPI_KEY>',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'openai/gpt-5-mini-2025-08-07',
        input: 'Hello', // Insert your question here, instead of Hello
      }),
    });

    if (!response.ok) {
      throw new Error(`HTTP error! Status ${response.status}`);
    }

    const data = await response.json();
    console.log(JSON.stringify(data, null, 2));
  } catch (error) {
    console.error('Error', error);
  }
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "resp_68961d7996008194ad93b77daf572aa0093e9cc7b27f0232", "object": "response", "created_at": 1754668409, "error": null, "incomplete_details": null, "instructions": null, "max_output_tokens": 512, "model": "gpt-5-mini-2025-08-07", "output": [ { "id": "rs_68961d7be8448194b451f9b886730d0e093e9cc7b27f0232", "type": "reasoning", "summary": [] }, { "id": "msg_68961d7c5f348194acab9c1ae00d3baf093e9cc7b27f0232", "type": "message", "status": "completed", "content": [ { "type": "output_text", "annotations": [], "logprobs": [], "text": "Hi — how can I help you today?" } ], "role": "assistant" } ], "parallel_tool_calls": true, "previous_response_id": null, "reasoning": { "effort": "medium", "summary": null }, "temperature": 1, "text": { "format": { "type": "text" }, "verbosity": "medium" }, "tool_choice": "auto", "tools": [], "top_p": 1, "truncation": "disabled", "usage": { "input_tokens": 4, "input_tokens_details": { "cached_tokens": 0 }, "output_tokens": 63, "output_tokens_details": { "reasoning_tokens": 0 }, "total_tokens": 67 }, "metadata": {}, "output_text": "Hi — how can I help you today?" } ``` {% endcode %}
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-nano.md # gpt-5-nano {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `openai/gpt-5-nano-2025-08-07` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview The smallest, fastest, and most affordable model in the GPT-5 lineup. While it’s less capable than [GPT-5](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5) and [GPT-5-mini](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-mini) for math and coding tasks, it excels at everyday questions and offers solid response speed. ## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace the API key placeholder with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field; this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schemas), which lists all available parameters along with notes on how to use them (a short example follows these steps).

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
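For example, step 4 for this model could mean adding the optional `max_completion_tokens` parameter (listed in the Chat Completions schema below) to cap the length of the reply. A minimal Python sketch; the `<YOUR_AIMLAPI_KEY>` placeholder and the value `256` are illustrative:

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-5-nano-2025-08-07",
        "messages": [{"role": "user", "content": "Hello"}],
        # Optional: an upper bound on generated tokens (visible output + reasoning)
        "max_completion_tokens": 256,
    },
)
print(response.json())
```
{% endcode %}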
## API Schemas
Chat Completions vs. Responses API **Chat Completions**\ The *chat completions* API is the older, chat-oriented interface where you send a list of messages (`role: user`, `role: assistant`, etc.), and the model returns a single response. It was designed specifically for conversational workflows and follows a structured chat message format. It is now considered a legacy interface. **Responses**\ The *Responses* API is the newer, unified interface used across OpenAI’s latest models. Instead of focusing only on chat, it supports multiple input types (text, images, audio, tools, etc.) and multiple output modalities (text, JSON, images, audio, video). It is more flexible, more consistent across models, and intended to replace chat completions entirely.
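In practice, the difference you will notice first is the request body: Chat Completions takes a `messages` array, while Responses takes an `input` value. A minimal sketch of the two payloads for this model:

{% code overflow="wrap" %}
```python
# Same prompt, two request bodies.

# POST https://api.aimlapi.com/v1/chat/completions
chat_completions_body = {
    "model": "openai/gpt-5-nano-2025-08-07",
    "messages": [{"role": "user", "content": "Hello"}],
}

# POST https://api.aimlapi.com/v1/responses
responses_body = {
    "model": "openai/gpt-5-nano-2025-08-07",
    "input": "Hello",
}
```
{% endcode %}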
### Chat Completions Endpoint ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-5-nano-2025-08-07"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"reasoning_effort":{"type":"string","enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"openai/gpt-5-nano-2025-08-07"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ### Responses Endpoint This endpoint is currently used *only* with OpenAI models. Some models support both the `/chat/completions` and `/responses` endpoints, while others support only one of them. ## POST /v1/responses > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/responses":{"post":{"operationId":"_v1_responses","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-5-nano-2025-08-07"]},"input":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the user role."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. 
Instructions given with the developer or system role take precedence over instructions given with the user role."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"],"description":"An output message from the model."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"],"description":"The results of a web search tool call."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"],"description":"A tool call to run a function."},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"],"description":"The output of a function tool call."},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"],"description":"A description of the chain of thought used by a reasoning model while generating a response."},{"type":"object","properties":{"code":{"type":"string","description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","interpreting"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["code","id","outputs","status","type","container_id"],"description":"A tool call to run code."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The JSON schema describing the tool's input."},"name":{"type":"string","description":"The name of the tool."},"annotations":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Additional annotations about the tool."},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["input_schema","name"]},"description":"The tools available on the server."},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"],"description":"A list of tools available on an MCP server."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"],"description":"A request for human approval of a tool invocation."},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"],"description":"A response to an MCP approval request."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"],"description":"An invocation of a tool on an MCP server."},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}],"description":"Text, image, or file inputs to the model, used to generate a response."},"background":{"type":"boolean","default":false,"description":"Whether to run the model response in the background."},"instructions":{"type":"string","nullable":true,"description":"A system (or developer) message inserted into the model's context.\n\nWhen using along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses."},"include":{"type":"array","nullable":true,"items":{"type":"string","enum":["message.input_image.image_url","computer_call_output.output.image_url","reasoning.encrypted_content","code_interpreter_call.outputs"]},"description":"Specify additional output data to include in the model response. Currently supported values are:\n- code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.\n- computer_call_output.output.image_url: Include image urls from the computer call output.\n- file_search_call.results: Include the search results of the file search tool call.\n- message.output_text.logprobs: Include logprobs with assistant messages.\n- reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).\n"},"max_output_tokens":{"type":"integer","description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]}]},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"store":{"type":"boolean","nullable":true,"default":false,"description":"Whether to store the generated model response for later retrieval via API."},"stream":{"type":"boolean","nullable":true,"default":false,"description":"If set to true, the model response data will be streamed to the client as it is generated using server-sent events. "},"text":{"type":"object","properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["format"],"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"truncation":{"type":"string","enum":["auto","disabled"],"default":"disabled","description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"tools":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","description":"A description of the function. Used by the model to determine whether or not to call the function."}},"required":["name","parameters","strict","type"],"description":"Defines a function in your own code the model can choose to call."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. 
Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"]}],"description":"How the model should select which tool (or tools) to use when generating a response."}},"required":["model","input"],"title":"openai/gpt-5-nano-2025-08-07"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, 
equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. 
Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"]}},"text/event-stream":{"schema":{"oneOf":[{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.done"],"description":"The type of the 
event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.done"],"description":"The type of the event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The partial code snippet being streamed by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The final code snippet output by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.done"],"description":"The type of the event."}},"required":["code","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter is interpreting code."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.interpreting"],"description":"The type of the 
event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. 
Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"Properties of the completed response."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.completed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."},"param":{"type":"string","description":"The error parameter."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["error"],"description":"The type of the event."}},"required":["code","message","param","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is searching."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The function-call arguments delta that is added."},"item_id":{"type":"string","description":"The ID of the output item that the function-call arguments delta is added to."},"output_index":{"type":"number","description":"The index of the output item that the function-call arguments delta is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"The function-call arguments."},"item_id":{"type":"string","description":"The ID of the item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this 
Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. 
One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.in_progress"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was 
created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.failed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The 
error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was incomplete."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.incomplete"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was added."},"output_index":{"type":"number","description":"The index of the output item that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.added"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was marked done."},"output_index":{"type":"number","description":"The index of the output item that was marked done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.done"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added to the summary."},"item_id":{"type":"string","description":"The ID of the item this summary text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","summary_index","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary text is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"text":{"type":"string","description":"The full text of the completed reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.done"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","summary_index","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part this delta is associated with."},"delta":{"type":"string","description":"The text delta that was added to the reasoning content."},"item_id":{"type":"string","description":"The ID of the item this reasoning text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.reasoning_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part."},"item_id":{"type":"string","description":"The ID of the item this reasoning text is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The full text of the completed reasoning content."},"type":{"type":"string","enum":["response.reasoning_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","sequence_number","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is added to."},"delta":{"type":"string","description":"The refusal text that is added."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is added to."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is finalized."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is finalized."},"refusal":{"type":"string","description":"The refusal text that is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","refusal","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web 
search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.generating"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"partial_image_b64":{"type":"string","description":"Base64-encoded partial image data, suitable for rendering as an image."},"partial_image_index":{"type":"number","description":"0-based index for the partial image (backend is 1-based, but this is 0-based for the user)."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["response.image_generation_call.partial_image"],"description":"The type of the event."}},"required":["item_id","output_index","partial_image_b64","partial_image_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"A JSON string containing the partial update to the arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string containing the finalized arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that completed."},"output_index":{"type":"number","description":"The index of the output item that completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that produced this output."},"output_index":{"type":"number","description":"The index of the output item that was processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool 
call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that is being processed."},"output_index":{"type":"number","description":"The index of the output item that is being processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"annotation":{"nullable":true,"description":"The annotation object being added."},"annotation_index":{"type":"number","description":"The index of the annotation within the content part."},"content_index":{"type":"number","description":"The index of the content part within the output item."},"item_id":{"type":"string","description":"The unique identifier of the item to which the annotation is being added."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.annotation.added"],"description":"The type of the event."}},"required":["annotation_index","content_index","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The full response object that is queued."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.queued"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The incremental input data (delta) for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this delta applies 
to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"input":{"type":"string","description":"The complete input data for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this event applies to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.done"],"description":"The type of the event."}},"required":["input","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The completed summary part."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.done"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text content is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the text content is finalized."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text content is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The text content that is finalized."},"type":{"type":"string","enum":["response.output_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","logprobs","output_index","sequence_number","text","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the 
response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The summary part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.added"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text delta was added to."},"delta":{"type":"string","description":"The text delta that was added."},"item_id":{"type":"string","description":"The ID of the output item that the text delta was added to."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text delta was added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","logprobs","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that is done."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the 
event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that is done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was created."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.created"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that was added."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added 
to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.added"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]}]}}}}}}}}}
```

## Code Example

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json"
    },
    json={
        "model": "openai/gpt-5-nano-2025-08-07",
        "messages": [
            {
                "role": "user",
                "content": "Hello"  # insert your prompt here, instead of Hello
            }
        ]
    }
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  const response = await fetch('https://api.aimlapi.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      // insert your AIML API Key instead of 
      'Authorization': 'Bearer ',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'openai/gpt-5-nano-2025-08-07',
      messages: [
        {
          role: 'user',
          content: 'Hello' // insert your prompt here, instead of Hello
        }
      ],
    }),
  });

  const data = await response.json();
  console.log(JSON.stringify(data, null, 2));
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "chatcmpl-C2KfH9AvnoYVczpOq4JXrtYK3nw1K",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "message": {
        "role": "assistant",
        "content": "Hello! Nice to meet you. What would you like to do today? I can help with things like:\n- explain concepts or answer questions\n- draft or edit text\n- brainstorm ideas\n- write or debug code\n- summarize articles or documents\n- plan trips or schedules\n- learn a new skill or topic\n\nTell me what you’re interested in or ask me something specific.",
        "refusal": null,
        "annotations": []
      }
    }
  ],
  "created": 1754672127,
  "model": "gpt-5-nano-2025-08-07",
  "usage": {
    "prompt_tokens": 1,
    "completion_tokens": 342,
    "total_tokens": 343,
    "prompt_tokens_details": {
      "cached_tokens": 0,
      "audio_tokens": 0
    },
    "completion_tokens_details": {
      "reasoning_tokens": 320,
      "audio_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    }
  },
  "system_fingerprint": null
}
```
{% endcode %}
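If you only need the assistant's reply rather than the full response object, you can pull it out of the `choices` array shown above. A minimal sketch, assuming `data` is the parsed JSON from the Python example:

{% code overflow="wrap" %}
```python
# Minimal sketch: extract just the assistant text from the parsed
# /v1/chat/completions response stored in `data` (see the example above).
reply = data["choices"][0]["message"]["content"]
print(reply)
```
{% endcode %}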
## Code Example #2: Using /responses Endpoint

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json"
    },
    json={
        "model": "openai/gpt-5-nano-2025-08-07",
        "input": "Hello"  # Insert your question for the model here, instead of Hello
    }
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  try {
    const response = await fetch('https://api.aimlapi.com/v1/responses', {
      method: 'POST',
      headers: {
        // Insert your AIML API Key instead of 
        'Authorization': 'Bearer ',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'openai/gpt-5-nano-2025-08-07',
        input: 'Hello', // Insert your question here, instead of Hello
      }),
    });

    if (!response.ok) {
      throw new Error(`HTTP error! Status ${response.status}`);
    }

    const data = await response.json();
    console.log(JSON.stringify(data, null, 2));
  } catch (error) {
    console.error('Error', error);
  }
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "resp_68962ccb73048196ac4008f5a533a3f50f382c4041cc6f52",
  "object": "response",
  "created_at": 1754672331,
  "error": null,
  "incomplete_details": {
    "reason": "max_output_tokens"
  },
  "instructions": null,
  "max_output_tokens": 512,
  "model": "gpt-5-nano-2025-08-07",
  "output": [
    {
      "id": "rs_68962ccc3b308196ae895cb1ea6a41d90f382c4041cc6f52",
      "type": "reasoning",
      "summary": []
    },
    {
      "id": "msg_68962ccfdaf48196b1d198c0b1eef3c50f382c4041cc6f52",
      "type": "message",
      "status": "incomplete",
      "content": [
        {
          "type": "output_text",
          "annotations": [],
          "logprobs": [],
          "text": "Hi there! How can I help today?\n\nI can assist with a wide range of things, for example:\n- Answer questions or explain concepts\n- Draft or edit emails, essays, resumes, or reports\n- Generate ideas for projects, stories, or presentations"
        }
      ],
      "role": "assistant"
    }
  ],
  "parallel_tool_calls": true,
  "previous_response_id": null,
  "reasoning": {
    "effort": "medium",
    "summary": null
  },
  "temperature": 1,
  "text": {
    "format": {
      "type": "text"
    },
    "verbosity": "medium"
  },
  "tool_choice": "auto",
  "tools": [],
  "top_p": 1,
  "truncation": "disabled",
  "usage": {
    "input_tokens": 1,
    "input_tokens_details": {
      "cached_tokens": 0
    },
    "output_tokens": 424,
    "output_tokens_details": {
      "reasoning_tokens": 448
    },
    "total_tokens": 425
  },
  "metadata": {},
  "output_text": "Hi there! How can I help today?\n\nI can assist with a wide range of things, for example:\n- Answer questions or explain concepts\n- Draft or edit emails, essays, resumes, or reports\n- Generate ideas for projects, stories, or presentations"
}
```
{% endcode %}
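As the response above shows, the generated text appears both inside the `message` item of the `output` array and in the top-level `output_text` convenience field. A minimal sketch for getting just the text out of the parsed `data` dictionary from Code Example #2, assuming `output_text` may not always be present and falling back to iterating over `output`:

{% code overflow="wrap" %}
```python
# Minimal sketch: get the generated text from a parsed /v1/responses result (`data`).
text = data.get("output_text")
if text is None:
    # Fall back to collecting output_text parts from the message items in `output`.
    parts = []
    for item in data.get("output", []):
        if item.get("type") == "message":
            for content in item.get("content", []):
                if content.get("type") == "output_text":
                    parts.append(content["text"])
    text = "\n".join(parts)
print(text)
```
{% endcode %}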
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-pro.md

# gpt-5-pro

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `openai/gpt-5-pro`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}

## Model Overview

A version of [GPT-5](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5) that produces smarter and more precise responses.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example-using-responses-endpoint) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `input` field—this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `input` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
## API Schema

{% hint style="warning" %}
Note: This model can ONLY be called via the `/responses` endpoint!
{% endhint %}
Chat Completions vs. Responses API

**Chat Completions**\
The *chat completions* API is the older, chat-oriented interface where you send a list of messages (`role: user`, `role: assistant`, etc.), and the model returns a single response. It was designed specifically for conversational workflows and follows a structured chat message format. It is now considered a legacy interface.

**Responses**\
The *Responses* API is the newer, unified interface used across OpenAI’s latest models. Instead of focusing only on chat, it supports multiple input types (text, images, audio, tools, etc.) and multiple output modalities (text, JSON, images, audio, video). It is more flexible, more consistent across models, and intended to replace chat completions entirely.
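Because `openai/gpt-5-pro` is only available through the `/responses` endpoint, a request to it follows the same shape as the `/responses` example shown earlier in this document, just with a different model ID. A minimal sketch (the empty Bearer token and the `Hello` prompt are placeholders to replace with your own values):

{% code overflow="wrap" %}
```python
import requests
import json

# Minimal sketch of a /v1/responses call with openai/gpt-5-pro,
# adapted from the /responses code example earlier in this document.
# Replace the empty Bearer token with your AIML API key and "Hello" with your prompt.
response = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers={
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-5-pro",
        "input": "Hello",
    },
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}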
### Responses Endpoint This endpoint is currently used *only* with OpenAI models. Some models support both the `/chat/completions` and `/responses` endpoints, while others support only one of them. ## POST /v1/responses > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/responses":{"post":{"operationId":"_v1_responses","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-5-pro"]},"input":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the user role."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"],"description":"An output message from the model."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"],"description":"The results of a web search tool call."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"],"description":"A tool call to run a function."},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"],"description":"The output of a function tool call."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The JSON schema describing the tool's input."},"name":{"type":"string","description":"The name of the tool."},"annotations":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Additional annotations about the tool."},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["input_schema","name"]},"description":"The tools available on the server."},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. 
Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"],"description":"A list of tools available on an MCP server."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"],"description":"A request for human approval of a tool invocation."},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"],"description":"A response to an MCP approval request."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"],"description":"An invocation of a tool on an MCP server."},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}],"description":"Text, image, or file inputs to the model, used to generate a response."},"background":{"type":"boolean","default":false,"description":"Whether to run the model response in the background."},"instructions":{"type":"string","nullable":true,"description":"A system (or developer) message inserted into the model's context.\n\nWhen using along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses."},"include":{"type":"array","nullable":true,"items":{"type":"string","enum":["message.input_image.image_url","computer_call_output.output.image_url","reasoning.encrypted_content","code_interpreter_call.outputs"]},"description":"Specify additional output data to include in the model response. 
Currently supported values are:\n- code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.\n- computer_call_output.output.image_url: Include image urls from the computer call output.\n- file_search_call.results: Include the search results of the file search tool call.\n- message.output_text.logprobs: Include logprobs with assistant messages.\n- reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).\n"},"max_output_tokens":{"type":"integer","description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]}]},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"store":{"type":"boolean","nullable":true,"default":false,"description":"Whether to store the generated model response for later retrieval via API."},"stream":{"type":"boolean","nullable":true,"default":false,"description":"If set to true, the model response data will be streamed to the client as it is generated using server-sent events. "},"text":{"type":"object","properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. 
Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["format"],"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"truncation":{"type":"string","enum":["auto","disabled"],"default":"disabled","description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"tools":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","description":"A description of the function. Used by the model to determine whether or not to call the function."}},"required":["name","parameters","strict","type"],"description":"Defines a function in your own code the model can choose to call."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. 
Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."}]},"description":"An array of tools the model may call while generating a response. 
You can specify which tool to use by setting the tool_choice parameter."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"]}],"description":"How the model should select which tool (or tools) to use when generating a response."}},"required":["model","input"],"title":"openai/gpt-5-pro"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. 
Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"]}},"text/event-stream":{"schema":{"oneOf":[{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.done"],"description":"The type of the 
event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.done"],"description":"The type of the event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The partial code snippet being streamed by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The final code snippet output by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.done"],"description":"The type of the event."}},"required":["code","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter is interpreting code."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.interpreting"],"description":"The type of the 
event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. 
Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects."},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects."},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"Properties of the completed response."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.completed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."},"param":{"type":"string","description":"The error parameter."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["error"],"description":"The type of the event."}},"required":["code","message","param","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is searching."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The function-call arguments delta that is added."},"item_id":{"type":"string","description":"The ID of the output item that the function-call arguments delta is added to."},"output_index":{"type":"number","description":"The index of the output item that the function-call arguments delta is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"The function-call arguments."},"item_id":{"type":"string","description":"The ID of the item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this 
Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. 
One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g."},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g."},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.in_progress"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was 
created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g. [{x: 100, y: 200}, {x: 200, y: 300}]."},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string","description":"The name of the tool to run."},"server_label":{"type":"string","description":"The label of the MCP server making the request."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g. [{x: 100, y: 200}, {x: 200, y: 300}]."},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string","description":"The name of the tool to run."},"server_label":{"type":"string","description":"The label of the MCP server making the request."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.failed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The 
error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
The partial `response` object reported by the `response.incomplete` event includes the following fields:

* `tool_choice` – how the model selects tools: `none`, `auto`, or `required`; a built-in tool type (`web_search_preview`, `web_search_preview_2025_03_11`, `computer_use_preview`, `code_interpreter`, `mcp`, `file_search`, `image_generation`); or an object naming a specific function to force (`{"type": "function", "name": ...}`).
* `tools` – the array of tool definitions the model may call: web search preview (with `search_context_size` of `low`, `medium`, or `high` – `medium` is the default – and an approximate `user_location` with `city`, `country`, `region`, and `timezone`), computer use preview (`display_width`, `display_height`, `environment`), remote MCP servers (`server_label`, `server_url`, optional `allowed_tools`, `headers`, and `require_approval`), code interpreter (a `container` ID or `{"type": "auto"}`), local shell, custom functions (`name`, JSON-schema `parameters`, `strict`, `description`), and image generation (`model`, `background`, `quality`, `size`, `output_format`, `output_compression`, `moderation`, `partial_images`, optional `input_image_mask`).
* `top_p` – nucleus-sampling alternative to temperature: with `0.1`, only the tokens comprising the top 10% probability mass are considered. Altering this or temperature, but not both, is recommended.
* `truncation` – `auto` drops input items from the middle of the conversation when the context window is exceeded; `disabled` (the default) fails such requests with a 400 error.
* `usage` – `input_tokens` (with a `cached_tokens` breakdown), `output_tokens` (with a `reasoning_tokens` breakdown), and `total_tokens`.

The required fields of this response object are `created_at`, `id`, `model`, `object`, and `parallel_tool_calls`; the `response.incomplete` event itself adds a `sequence_number` and a `type` of `response.incomplete`.
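To make the `tools` and `tool_choice` structures above more concrete, here is a minimal request sketch. The tool objects follow the schema excerpt above; the endpoint path (`/v1/responses`), the model ID, and the `input` field are illustrative assumptions rather than values confirmed by this excerpt.

```python
import requests

payload = {
    "model": "openai/gpt-4o",  # hypothetical model ID, for illustration only
    "input": "Find recent news about TripoSR and summarize it.",  # assumed request field
    "tools": [
        {
            # Web search tool definition, per the schema above
            "type": "web_search_preview",
            "search_context_size": "medium",
            "user_location": {
                "type": "approximate",
                "city": "San Francisco",
                "country": "US",
                "region": "California",
                "timezone": "America/Los_Angeles",
            },
        },
        {
            # Remote MCP server definition, per the schema above
            "type": "mcp",
            "server_label": "internal-tools",
            "server_url": "https://example.com/mcp",
            "require_approval": "never",
        },
    ],
    "tool_choice": "auto",  # or "none", "required", or a specific function
}

response = requests.post(
    # Assumed endpoint path; not confirmed by this schema excerpt
    "https://api.aimlapi.com/v1/responses",
    headers={
        # Insert your AIMLAPI key after "Bearer "
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json=payload,
)
response.raise_for_status()
print(response.json()["id"])
```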
`response.output_item.added` and `response.output_item.done` deliver the output `item` that was added or finished, together with its `output_index` and the event's `sequence_number`. The item is one of:

* an assistant **message**, whose content parts are either `output_text` (with `annotations` – URL citations, file citations, container file citations, or file paths – and optional `logprobs`) or a `refusal`;
* a **file_search_call** with its `queries`, `status`, and optional `results` (file ID, filename, score, text, attributes);
* a **computer_call** with an `action` (`click`, `double_click`, `drag`, `keypress`, `move`, `screenshot`, `scroll`, `type`, or `wait`), a `call_id`, `pending_safety_checks`, and `status`, and a **computer_call_output** carrying a `computer_screenshot` image plus any `acknowledged_safety_checks`;
* a **web_search_call**, a **function_call** (`name`, JSON-string `arguments`, `call_id`), a **reasoning** item (`summary` texts, optional `encrypted_content`), an **image_generation_call** (`result`, `status`), a **code_interpreter_call** (`code`, `outputs` of logs or images, `container_id`), or a **local_shell_call** (an `exec` action with `command`, `env`, optional `timeout_ms`, `user`, and `working_directory`);
* an MCP item: **mcp_list_tools** (the tools exposed by a server), **mcp_approval_request**, or **mcp_call** (`arguments`, `output`, `error`).

The remaining streaming events report progress with an `item_id`, `output_index`, and `sequence_number` (plus a `summary_index` or `content_index` where relevant):

* `response.reasoning_summary_text.delta` / `.done` and `response.reasoning_text.delta` / `.done` – incremental and final reasoning summaries and reasoning text;
* `response.refusal.delta` / `.done` – incremental and final refusal text;
* `response.web_search_call.in_progress` / `.searching` / `.completed` – web search status;
* `response.image_generation_call.in_progress` / `.generating` / `.partial_image` / `.completed` – image generation status, with `partial_image_b64` and a 0-based `partial_image_index` on partial-image events;
* `response.mcp_call_arguments.delta` / `.done`, `response.mcp_call.in_progress` / `.completed` / `.failed`, and `response.mcp_list_tools.completed` – MCP tool-call progress.
call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that is being processed."},"output_index":{"type":"number","description":"The index of the output item that is being processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"annotation":{"nullable":true,"description":"The annotation object being added."},"annotation_index":{"type":"number","description":"The index of the annotation within the content part."},"content_index":{"type":"number","description":"The index of the content part within the output item."},"item_id":{"type":"string","description":"The unique identifier of the item to which the annotation is being added."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.annotation.added"],"description":"The type of the event."}},"required":["annotation_index","content_index","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The full response object that is queued."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.queued"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The incremental input data (delta) for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this delta applies 
to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"input":{"type":"string","description":"The complete input data for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this event applies to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.done"],"description":"The type of the event."}},"required":["input","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The completed summary part."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.done"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text content is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the text content is finalized."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text content is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The text content that is finalized."},"type":{"type":"string","enum":["response.output_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","logprobs","output_index","sequence_number","text","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the 
response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The summary part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.added"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text delta was added to."},"delta":{"type":"string","description":"The text delta that was added."},"item_id":{"type":"string","description":"The ID of the output item that the text delta was added to."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text delta was added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","logprobs","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that is done."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the 
event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that is done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was created."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.created"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that was added."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added 
to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.added"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]}]}}}}}}}}}
```

## Code Example: Using the /responses Endpoint

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for printing the structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-5-pro",
        "input": "Hello",  # Insert your question for the model here, instead of Hello
    },
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  try {
    const response = await fetch('https://api.aimlapi.com/v1/responses', {
      method: 'POST',
      headers: {
        // Insert your AIML API Key instead of
        'Authorization': 'Bearer ',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'openai/gpt-5-pro',
        input: 'Hello', // Insert your question here, instead of Hello
      }),
    });

    if (!response.ok) {
      throw new Error(`HTTP error! Status ${response.status}`);
    }

    const data = await response.json();
    console.log(JSON.stringify(data, null, 2));
  } catch (error) {
    console.error('Error', error);
  }
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
**Response**

{% code overflow="wrap" %}
```json5
{
  "id": "resp_68963fb142d08197b4d3ae3ad852542c054845c6ea84caa2",
  "object": "response",
  "created_at": 1754677169,
  "error": null,
  "incomplete_details": null,
  "instructions": null,
  "max_output_tokens": 512,
  "model": "gpt-5-chat-latest",
  "output": [
    {
      "id": "msg_68963fb1c5b88197b1ac96592463ffa7054845c6ea84caa2",
      "type": "message",
      "status": "completed",
      "content": [
        {
          "type": "output_text",
          "annotations": [],
          "logprobs": [],
          "text": "Hi! How’s your day going?"
        }
      ],
      "role": "assistant"
    }
  ],
  "parallel_tool_calls": true,
  "previous_response_id": null,
  "reasoning": {
    "effort": null,
    "summary": null
  },
  "temperature": 1,
  "text": {
    "format": {
      "type": "text"
    },
    "verbosity": "medium"
  },
  "tool_choice": "auto",
  "tools": [],
  "top_p": 1,
  "truncation": "disabled",
  "usage": {
    "input_tokens": 21,
    "input_tokens_details": {
      "cached_tokens": 0
    },
    "output_tokens": 189,
    "output_tokens_details": {
      "reasoning_tokens": 0
    },
    "total_tokens": 210
  },
  "metadata": {},
  "output_text": "Hi! How’s your day going?"
}
```
{% endcode %}
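If you only need the assistant's text and token usage from this response, you can read them straight from the parsed JSON. A minimal sketch, continuing from the Python example above (it assumes `data` holds the parsed `/v1/responses` reply shown here):

{% code overflow="wrap" %}
```python
# `data` is the parsed /v1/responses reply from the example above.
# The top-level "output_text" field is a convenience copy of the assistant's text;
# the same text also appears inside output[0]["content"][0]["text"].
answer = data.get("output_text") or data["output"][0]["content"][0]["text"]

usage = data.get("usage", {})
print("Assistant:", answer)
print("Total tokens:", usage.get("total_tokens"))
```
{% endcode %}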
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5.2-chat-latest.md # gpt-5.2-chat-latest {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `openai/gpt-5-2-chat-latest` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview The most capable model series for professional knowledge work as of December 2025.\ Designed as a low-latency, highly interactive model, it offers users a natural, engaging, and adaptive conversational experience. ## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `<YOUR_AIMLAPI_KEY>` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
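If you'd like a quick preview of what the finished request looks like before scrolling past the schema below, here is a minimal sketch that follows the steps above. The `<YOUR_AIMLAPI_KEY>` placeholder and the `Hello` prompt are only examples; the full, tabbed code example referenced in step 2 appears further down the page.

{% code overflow="wrap" %}
```python
import requests

# A condensed version of the steps above: POST to /v1/chat/completions
# with the model ID and your prompt in the `content` field.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-5-2-chat-latest",
        "messages": [{"role": "user", "content": "Hello"}],
    },
)

print(response.json())
```
{% endcode %}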
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-5-2-chat-latest"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. 
Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"openai/gpt-5-2-chat-latest"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"openai/gpt-5-2-chat-latest", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'openai/gpt-5-2-chat-latest', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
**Response**

{% code overflow="wrap" %}
```json5
{
  "id": "chatcmpl-Clk79duTHj2Vxfm6qQtyoGV4Wv7W2",
  "object": "chat.completion",
  "created": 1765494715,
  "model": "gpt-5.2-chat-latest",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today? 😊",
        "refusal": null,
        "annotations": []
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 7,
    "completion_tokens": 13,
    "total_tokens": 20,
    "prompt_tokens_details": {
      "cached_tokens": 0,
      "audio_tokens": 0
    },
    "completion_tokens_details": {
      "reasoning_tokens": 0,
      "audio_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    }
  },
  "service_tier": "default",
  "system_fingerprint": null,
  "meta": {
    "usage": {
      "credits_used": 409
    }
  }
}
```
{% endcode %}
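If you just want to print the assistant's reply and check the usage reported above, you can pull both from the parsed response. A minimal sketch, continuing from the Python example (it assumes `data` holds the parsed `/v1/chat/completions` reply shown here):

{% code overflow="wrap" %}
```python
# `data` is the parsed /v1/chat/completions reply from the example above.
reply = data["choices"][0]["message"]["content"]
total_tokens = data["usage"]["total_tokens"]
# "meta.usage.credits_used" is the AIML API billing field shown in the sample response.
credits_used = data.get("meta", {}).get("usage", {}).get("credits_used")

print("Assistant:", reply)
print(f"Tokens: {total_tokens}, credits used: {credits_used}")
```
{% endcode %}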
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5.2-codex.md # gpt-5.2-codex {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `openai/gpt-5-2-codex` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A specialized edition of [GPT 5.2](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5.2) built for software engineering and coding workflows. It excels in both interactive development sessions and long, autonomous execution of complex engineering tasks. The model can build projects from scratch, develop features, debug, perform large-scale refactoring, and review code. ## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example-using-responses-endpoint) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `<YOUR_AIMLAPI_KEY>` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `input` field—this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `input` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
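For a quick preview of what the finished request looks like before the schema below, here is a minimal sketch that follows the steps above. This model is called via the `/v1/responses` endpoint only (see the note in the next section); the `<YOUR_AIMLAPI_KEY>` placeholder and the sample coding prompt are just examples, and the full, tabbed code example referenced in step 2 appears further down the page.

{% code overflow="wrap" %}
```python
import requests

# A condensed version of the steps above: POST to /v1/responses
# with the model ID and your prompt in the `input` field.
response = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-5-2-codex",
        # Example coding task; replace with your own request.
        "input": "Review this function for bugs: def add(a, b): return a - b",
    },
)

print(response.json())
```
{% endcode %}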
## API Schema
Chat Completions vs. Responses API **Chat Completions**\ The *chat completions* API is the older, chat-oriented interface where you send a list of messages (`role: user`, `role: assistant`, etc.), and the model returns a single response. It was designed specifically for conversational workflows and follows a structured chat message format. It is now considered a legacy interface. **Responses**\ The *Responses* API is the newer, unified interface used across OpenAI’s latest models. Instead of focusing only on chat, it supports multiple input types (text, images, audio, tools, etc.) and multiple output modalities (text, JSON, images, audio, video). It is more flexible, more consistent across models, and intended to replace chat completions entirely.
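To make the difference concrete, here is a minimal sketch of the two request shapes. The model IDs are taken from this documentation purely as examples (each model's page states which endpoint it supports; `openai/gpt-5-2-codex`, for instance, is `/responses`-only), and `<YOUR_AIMLAPI_KEY>` is a placeholder for your key.

{% code overflow="wrap" %}
```python
import requests

HEADERS = {
    # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
    "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
    "Content-Type": "application/json",
}

# Chat Completions: a list of role-tagged messages;
# the answer comes back in choices[0].message.content.
chat = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers=HEADERS,
    json={
        "model": "openai/gpt-5-2-chat-latest",
        "messages": [{"role": "user", "content": "Hello"}],
    },
).json()
print(chat["choices"][0]["message"]["content"])

# Responses: a single `input` (plain text or richer structured items);
# the answer is in the `output` array, with `output_text` as a convenience copy.
resp = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers=HEADERS,
    json={
        "model": "openai/gpt-5-2-codex",
        "input": "Hello",
    },
).json()
print(resp["output_text"])
```
{% endcode %}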
### Responses Endpoint This endpoint is currently used *only* with OpenAI models. Some models support both the `/chat/completions` and `/responses` endpoints, while others support only one of them. {% hint style="warning" %} Note: This model can ONLY be called via the `/responses` endpoint! {% endhint %} ## POST /v1/responses > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/responses":{"post":{"operationId":"_v1_responses","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-5-2-codex"]},"input":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the user role."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. 
Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"],"description":"An output message from the model."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. 
Always web_search_call."}},"required":["id","status","type"],"description":"The results of a web search tool call."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"],"description":"A tool call to run a function."},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"],"description":"The output of a function tool call."},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"],"description":"A description of the chain of thought used by a reasoning model while generating a response."},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"],"description":"A tool call to run a command on the local shell."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"],"description":"The output of a local shell tool call."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The JSON schema describing the tool's input."},"name":{"type":"string","description":"The name of the tool."},"annotations":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Additional annotations about the tool."},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["input_schema","name"]},"description":"The tools available on the server."},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"],"description":"A list of tools available on an MCP server."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"],"description":"A request for human approval of a tool invocation."},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"],"description":"A response to an MCP approval request."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"],"description":"An invocation of a tool on an MCP server."},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}],"description":"Text, image, or file inputs to the model, used to generate a response."},"background":{"type":"boolean","default":false,"description":"Whether to run the model response in the background."},"instructions":{"type":"string","nullable":true,"description":"A system (or developer) message inserted into the model's context.\n\nWhen using along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses."},"include":{"type":"array","nullable":true,"items":{"type":"string","enum":["message.input_image.image_url","computer_call_output.output.image_url","reasoning.encrypted_content","code_interpreter_call.outputs"]},"description":"Specify additional output data to include in the model response. Currently supported values are:\n- code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.\n- computer_call_output.output.image_url: Include image urls from the computer call output.\n- file_search_call.results: Include the search results of the file search tool call.\n- message.output_text.logprobs: Include logprobs with assistant messages.\n- reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. 
This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).\n"},"max_output_tokens":{"type":"integer","description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]}]},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"store":{"type":"boolean","nullable":true,"default":false,"description":"Whether to store the generated model response for later retrieval via API."},"stream":{"type":"boolean","nullable":true,"default":false,"description":"If set to true, the model response data will be streamed to the client as it is generated using server-sent events. "},"text":{"type":"object","properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. 
Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["format"],"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"truncation":{"type":"string","enum":["auto","disabled"],"default":"disabled","description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"tools":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","description":"A description of the function. Used by the model to determine whether or not to call the function."}},"required":["name","parameters","strict","type"],"description":"Defines a function in your own code the model can choose to call."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. 
California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."}]},"description":"An array of tools the model may call while generating a response. 
You can specify which tool to use by setting the tool_choice parameter."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"]}],"description":"How the model should select which tool (or tools) to use when generating a response."}},"required":["model","input"],"title":"openai/gpt-5-2-codex"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. 
Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"]}},"text/event-stream":{"schema":{"oneOf":[{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.done"],"description":"The type of the 
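
The `usage` object described above reports token consumption for a completed response. A small sketch of reading it, assuming `data` already holds the parsed JSON body of a response:

```python
# `data` is assumed to be the parsed JSON body of a completed response.
usage = data.get("usage") or {}
print("input tokens: ", usage.get("input_tokens"))
print("output tokens:", usage.get("output_tokens"))
print("total tokens: ", usage.get("total_tokens"))

# Optional breakdowns (may be absent or null per the schema):
details = usage.get("output_tokens_details") or {}
print("reasoning tokens:", details.get("reasoning_tokens"))
```
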
event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.done"],"description":"The type of the event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The partial code snippet being streamed by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The final code snippet output by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.done"],"description":"The type of the event."}},"required":["code","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter is interpreting code."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.interpreting"],"description":"The type of the 
event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. 
Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
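
Output messages carry `output_text` parts whose `annotations` may include `url_citation` entries, as defined above. A sketch of walking the `output` array of a parsed response body (`data` is assumed to already be parsed JSON):

```python
# Sketch: extracting text and URL citations from the `output` array.
for item in data.get("output", []):
    if item.get("type") != "message":
        continue
    for part in item.get("content", []):
        if part.get("type") == "output_text":
            print(part["text"])
            for ann in part.get("annotations", []):
                if ann.get("type") == "url_citation":
                    print(f"  cited: {ann['title']} <{ann['url']}>")
        elif part.get("type") == "refusal":
            print("model refused:", part["refusal"])
```
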
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
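
A `computer_call` item carries an `action` for the caller to execute plus any `pending_safety_checks`; the caller then returns a `computer_call_output` containing a screenshot. A minimal sketch under the assumption that the caller produces the screenshot URL itself:

```python
# Sketch: answering a `computer_call` item with a `computer_call_output`.
# How the screenshot is produced is up to the caller; `screenshot_url` is a
# value you supply (a fully qualified URL or data URL).
def build_computer_call_output(computer_call: dict, screenshot_url: str) -> dict:
    return {
        "type": "computer_call_output",
        "call_id": computer_call["call_id"],
        "output": {
            "type": "computer_screenshot",
            "image_url": screenshot_url,
        },
        # Echo back any safety checks you have reviewed and accepted.
        "acknowledged_safety_checks": computer_call.get("pending_safety_checks", []),
    }
```
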
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
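
A `function_call` item delivers `arguments` as a JSON string; the caller executes its own implementation and replies with a `function_call_output` whose `output` is also a JSON string. A sketch of that round trip; `run_tool` is a hypothetical local dispatcher, not part of the schema:

```python
import json

# Sketch: executing a `function_call` item and building the matching
# `function_call_output`.
def answer_function_call(item: dict, run_tool) -> dict:
    args = json.loads(item["arguments"])     # arguments arrive as a JSON string
    result = run_tool(item["name"], **args)  # execute your own implementation
    return {
        "type": "function_call_output",
        "call_id": item["call_id"],          # must match the originating call
        "output": json.dumps(result),        # output is also a JSON string
    }
```
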
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
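
A `local_shell_call` item carries an `exec` action (`command`, optional `env`, `timeout_ms`, `working_directory`); the caller runs it and replies with a `local_shell_call_output`. A sketch under the assumption that stdout/stderr are what you want to report back; running model-provided commands is inherently risky, so sandbox appropriately:

```python
import json
import subprocess

# Sketch: executing a `local_shell_call` action and packaging the result.
# Field names follow the schema above; the stdout/stderr packaging is an
# assumption about what the caller wants to return.
def run_local_shell_call(item: dict) -> dict:
    action = item["action"]                       # action type is always "exec"
    completed = subprocess.run(
        action["command"],                        # the command, as a list of strings
        env=action.get("env") or None,            # optional environment variables
        cwd=action.get("working_directory"),      # optional working directory
        timeout=(action.get("timeout_ms") or 60_000) / 1000,
        capture_output=True,
        text=True,
    )
    return {
        "type": "local_shell_call_output",
        "id": item["call_id"],                    # ID of the originating call
        "output": json.dumps(
            {"stdout": completed.stdout, "stderr": completed.stderr}
        ),
    }
```
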
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
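
When an MCP server's tools are gated behind approval, the model emits an `mcp_approval_request` item, and the caller answers with an `mcp_approval_response`. A sketch of a simple allow-list policy; the tool names are hypothetical:

```python
# Sketch: approving or rejecting an `mcp_approval_request` item.
ALLOWED_MCP_TOOLS = {"search_docs", "fetch_page"}  # hypothetical allow-list

def answer_approval_request(item: dict) -> dict:
    approve = item["name"] in ALLOWED_MCP_TOOLS
    return {
        "type": "mcp_approval_response",
        "approval_request_id": item["id"],
        "approve": approve,
        "reason": None if approve else "tool not on the allow-list",
    }
```
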
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
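
The `json_schema` response format described above constrains the model to structured JSON output. A sketch of building the `text` configuration object; the schema itself is an example, not part of the API:

```python
# Sketch: requesting structured JSON output via the json_schema response format.
text_config = {
    "format": {
        "type": "json_schema",
        "name": "object_summary",   # example name
        "strict": True,             # enforce exact schema adherence
        "schema": {
            "type": "object",
            "properties": {
                "label": {"type": "string"},
                "confidence": {"type": "number"},
            },
            "required": ["label", "confidence"],
            "additionalProperties": False,
        },
    }
}
# Pass this object as the `text` parameter of the request payload.
```
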
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"Properties of the completed response."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.completed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."},"param":{"type":"string","description":"The error parameter."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["error"],"description":"The type of the event."}},"required":["code","message","param","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is searching."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The function-call arguments delta that is added."},"item_id":{"type":"string","description":"The ID of the output item that the function-call arguments delta is added to."},"output_index":{"type":"number","description":"The index of the output item that the function-call arguments delta is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"The function-call arguments."},"item_id":{"type":"string","description":"The ID of the item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this 
Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. 
One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string","description":"The name of the tool to run."},"server_label":{"type":"string","description":"The label of the MCP server making the request."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string","description":"The name of the tool to run."},"server_label":{"type":"string","description":"The label of the MCP server making the request."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.in_progress"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was 
created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.failed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The 
error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was incomplete."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.incomplete"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was added."},"output_index":{"type":"number","description":"The index of the output item that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.added"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was marked done."},"output_index":{"type":"number","description":"The index of the output item that was marked done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.done"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added to the summary."},"item_id":{"type":"string","description":"The ID of the item this summary text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","summary_index","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary text is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"text":{"type":"string","description":"The full text of the completed reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.done"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","summary_index","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part this delta is associated with."},"delta":{"type":"string","description":"The text delta that was added to the reasoning content."},"item_id":{"type":"string","description":"The ID of the item this reasoning text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.reasoning_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part."},"item_id":{"type":"string","description":"The ID of the item this reasoning text is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The full text of the completed reasoning content."},"type":{"type":"string","enum":["response.reasoning_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","sequence_number","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is added to."},"delta":{"type":"string","description":"The refusal text that is added."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is added to."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is finalized."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is finalized."},"refusal":{"type":"string","description":"The refusal text that is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","refusal","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web 
search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.generating"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"partial_image_b64":{"type":"string","description":"Base64-encoded partial image data, suitable for rendering as an image."},"partial_image_index":{"type":"number","description":"0-based index for the partial image (backend is 1-based, but this is 0-based for the user)."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["response.image_generation_call.partial_image"],"description":"The type of the event."}},"required":["item_id","output_index","partial_image_b64","partial_image_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"A JSON string containing the partial update to the arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string containing the finalized arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that completed."},"output_index":{"type":"number","description":"The index of the output item that completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that produced this output."},"output_index":{"type":"number","description":"The index of the output item that was processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool 
call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that is being processed."},"output_index":{"type":"number","description":"The index of the output item that is being processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"annotation":{"nullable":true,"description":"The annotation object being added."},"annotation_index":{"type":"number","description":"The index of the annotation within the content part."},"content_index":{"type":"number","description":"The index of the content part within the output item."},"item_id":{"type":"string","description":"The unique identifier of the item to which the annotation is being added."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.annotation.added"],"description":"The type of the event."}},"required":["annotation_index","content_index","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The full response object that is queued."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.queued"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The incremental input data (delta) for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this delta applies 
to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"input":{"type":"string","description":"The complete input data for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this event applies to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.done"],"description":"The type of the event."}},"required":["input","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The completed summary part."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.done"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text content is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the text content is finalized."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text content is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The text content that is finalized."},"type":{"type":"string","enum":["response.output_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","logprobs","output_index","sequence_number","text","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the 
response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The summary part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.added"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text delta was added to."},"delta":{"type":"string","description":"The text delta that was added."},"item_id":{"type":"string","description":"The ID of the output item that the text delta was added to."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text delta was added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","logprobs","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that is done."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the 
event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that is done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was created."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.created"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that was added."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added 
to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.added"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]}]}}}}}}}}}
```

## Code Example: Using /responses Endpoint

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-5-2-codex",
        "input": "Hello"  # Insert your question for the model here, instead of Hello
    },
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  try {
    const response = await fetch('https://api.aimlapi.com/v1/responses', {
      method: 'POST',
      headers: {
        // Insert your AIML API Key instead of 
        'Authorization': 'Bearer ',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'openai/gpt-5-2-codex',
        input: 'Hello', // Insert your question here, instead of Hello
      }),
    });

    if (!response.ok) {
      throw new Error(`HTTP error! Status ${response.status}`);
    }

    const data = await response.json();
    console.log(JSON.stringify(data, null, 2));
  } catch (error) {
    console.error('Error', error);
  }
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "9GEwjd_Z2_KEtQZARClau", "object": "response", "created_at": 1769503536, "status": "completed", "background": false, "billing": { "payer": "developer" }, "completed_at": 1769503538, "error": null, "frequency_penalty": 0, "incomplete_details": null, "instructions": null, "max_output_tokens": null, "max_tool_calls": null, "model": "gpt-5.2-codex", "output": [ { "id": "rs_0fb4c0cd166661bd0069787b3143dc8197bb5e5cf796d2be25", "type": "reasoning", "summary": [] }, { "id": "msg_0fb4c0cd166661bd0069787b31ce4c81979aedccb49fa3a181", "type": "message", "status": "completed", "content": [ { "type": "output_text", "annotations": [], "logprobs": [], "text": "Hello! How can I help you today?" } ], "role": "assistant" } ], "parallel_tool_calls": true, "presence_penalty": 0, "previous_response_id": null, "prompt_cache_key": null, "prompt_cache_retention": null, "reasoning": { "effort": "medium", "summary": null }, "safety_identifier": null, "service_tier": "default", "store": true, "temperature": 1, "text": { "format": { "type": "text" }, "verbosity": "medium" }, "tool_choice": "auto", "tools": [], "top_logprobs": 0, "top_p": 0.98, "truncation": "disabled", "usage": { "input_tokens": 7, "input_tokens_details": { "cached_tokens": 0 }, "output_tokens": 38, "output_tokens_details": { "reasoning_tokens": 0 }, "total_tokens": 45 }, "user": null, "metadata": {}, "output_text": "Hello! How can I help you today?" } ``` {% endcode %}
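In the raw response above, the assistant's reply appears both inside the `output` array (as an `output_text` content part of the `message` item) and in the top-level `output_text` convenience field. Below is a minimal sketch of pulling the text out of the parsed response; the helper name `extract_text` is ours, and the field access assumes the structure shown in the example.

{% code overflow="wrap" %}
```python
def extract_text(data: dict) -> str:
    # Prefer the aggregated convenience field when the API includes it.
    if data.get("output_text"):
        return data["output_text"]
    # Otherwise, collect output_text parts from assistant message items.
    parts = []
    for item in data.get("output", []):
        if item.get("type") == "message":
            for part in item.get("content", []):
                if part.get("type") == "output_text":
                    parts.append(part.get("text", ""))
    return "".join(parts)

# With the example response above parsed into `data`:
# print(extract_text(data))  # -> "Hello! How can I help you today?"
```
{% endcode %}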
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5.2-pro.md # gpt-5.2-pro {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `openai/gpt-5-2-pro` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview The Pro version is built for more challenging tasks and is available only through the Responses API, as it supports multi-turn interactions before generating a response. ## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](https://docs.aimlapi.com/api-references/text-models-llm/openai/broken-reference) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `input` field: this is what the model will respond to.

:digit\_four: **(Optional) Adjust other parameters if needed**

Only `model` and `input` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](https://docs.aimlapi.com/api-references/text-models-llm/openai/broken-reference), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/api-references/text-models-llm/openai/broken-reference).
{% endhint %}
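Before digging into the full schema below, here is a minimal sketch of such a call, assuming the same `/v1/responses` request pattern used elsewhere in these docs; only the model ID differs, and the empty Bearer token is a placeholder for your key.

{% code overflow="wrap" %}
```python
import requests
import json

response = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-5-2-pro",
        "input": "Hello",  # Insert your question for the model here, instead of Hello
    },
)
response.raise_for_status()
print(json.dumps(response.json(), indent=2, ensure_ascii=False))
```
{% endcode %}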
## API Schema
Chat Completions vs. Responses API **Chat Completions**\ The *chat completions* API is the older, chat-oriented interface where you send a list of messages (`role: user`, `role: assistant`, etc.), and the model returns a single response. It was designed specifically for conversational workflows and follows a structured chat message format. It is now considered a legacy interface. **Responses**\ The *Responses* API is the newer, unified interface used across OpenAI’s latest models. Instead of focusing only on chat, it supports multiple input types (text, images, audio, tools, etc.) and multiple output modalities (text, JSON, images, audio, video). It is more flexible, more consistent across models, and intended to replace chat completions entirely.
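To make the difference concrete, here is an illustrative comparison of the two request-body shapes for the same prompt. The chat-completions body is shown only for contrast (its model ID is a placeholder); `openai/gpt-5-2-pro` itself is served through the Responses API.

{% code overflow="wrap" %}
```python
# Chat Completions style: a list of role-tagged messages.
chat_completions_body = {
    "model": "<a chat-completions model>",  # illustrative placeholder
    "messages": [
        {"role": "user", "content": "Hello"},
    ],
}

# Responses style: a single `input` field that can be a plain string
# or a list of structured input items (text, files, etc.).
responses_body = {
    "model": "openai/gpt-5-2-pro",
    "input": "Hello",
}
```
{% endcode %}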
## POST /v1/responses > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/responses":{"post":{"operationId":"_v1_responses","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-5-2-pro"]},"input":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the user role."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. 
Instructions given with the developer or system role take precedence over instructions given with the user role."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"],"description":"An output message from the model."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"],"description":"The results of a web search tool call."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"],"description":"A tool call to run a function."},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"],"description":"The output of a function tool call."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The JSON schema describing the tool's input."},"name":{"type":"string","description":"The name of the tool."},"annotations":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Additional annotations about the tool."},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["input_schema","name"]},"description":"The tools available on the server."},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"],"description":"A list of tools available on an MCP server."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"],"description":"A request for human approval of a tool invocation."},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. 
Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"],"description":"A response to an MCP approval request."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"],"description":"An invocation of a tool on an MCP server."},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}],"description":"Text, image, or file inputs to the model, used to generate a response."},"include":{"type":"array","nullable":true,"items":{"type":"string","enum":["message.input_image.image_url","computer_call_output.output.image_url","reasoning.encrypted_content","code_interpreter_call.outputs"]},"description":"Specify additional output data to include in the model response. Currently supported values are:\n- code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.\n- computer_call_output.output.image_url: Include image urls from the computer call output.\n- file_search_call.results: Include the search results of the file search tool call.\n- message.output_text.logprobs: Include logprobs with assistant messages.\n- reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).\n"},"max_output_tokens":{"type":"integer","description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]}]},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"store":{"type":"boolean","nullable":true,"default":false,"description":"Whether to store the generated model response for later retrieval via API."},"stream":{"type":"boolean","nullable":true,"default":false,"description":"If set to true, the model response data will be streamed to the client as it is generated using server-sent events. "},"text":{"type":"object","properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["format"],"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"truncation":{"type":"string","enum":["auto","disabled"],"default":"disabled","description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"tools":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","description":"A description of the function. Used by the model to determine whether or not to call the function."}},"required":["name","parameters","strict","type"],"description":"Defines a function in your own code the model can choose to call."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"]}],"description":"How the model should select which tool (or tools) to use when generating a response."}},"required":["model","input"],"title":"openai/gpt-5-2-pro"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response 
is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. 
A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"]}},"text/event-stream":{"schema":{"oneOf":[{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.done"],"description":"The type of the 
event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.done"],"description":"The type of the event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The partial code snippet being streamed by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The final code snippet output by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.done"],"description":"The type of the event."}},"required":["code","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter is interpreting code."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.interpreting"],"description":"The type of the 
event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. 
Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"Properties of the completed response."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.completed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."},"param":{"type":"string","description":"The error parameter."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["error"],"description":"The type of the event."}},"required":["code","message","param","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is searching."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The function-call arguments delta that is added."},"item_id":{"type":"string","description":"The ID of the output item that the function-call arguments delta is added to."},"output_index":{"type":"number","description":"The index of the output item that the function-call arguments delta is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"The function-call arguments."},"item_id":{"type":"string","description":"The ID of the item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this 
Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. 
One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.in_progress"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was 
created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.failed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The 
error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was incomplete."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.incomplete"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was added."},"output_index":{"type":"number","description":"The index of the output item that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.added"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was marked done."},"output_index":{"type":"number","description":"The index of the output item that was marked done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.done"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added to the summary."},"item_id":{"type":"string","description":"The ID of the item this summary text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","summary_index","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary text is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"text":{"type":"string","description":"The full text of the completed reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.done"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","summary_index","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part this delta is associated with."},"delta":{"type":"string","description":"The text delta that was added to the reasoning content."},"item_id":{"type":"string","description":"The ID of the item this reasoning text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.reasoning_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part."},"item_id":{"type":"string","description":"The ID of the item this reasoning text is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The full text of the completed reasoning content."},"type":{"type":"string","enum":["response.reasoning_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","sequence_number","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is added to."},"delta":{"type":"string","description":"The refusal text that is added."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is added to."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is finalized."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is finalized."},"refusal":{"type":"string","description":"The refusal text that is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","refusal","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web 
search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.generating"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"partial_image_b64":{"type":"string","description":"Base64-encoded partial image data, suitable for rendering as an image."},"partial_image_index":{"type":"number","description":"0-based index for the partial image (backend is 1-based, but this is 0-based for the user)."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["response.image_generation_call.partial_image"],"description":"The type of the event."}},"required":["item_id","output_index","partial_image_b64","partial_image_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"A JSON string containing the partial update to the arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string containing the finalized arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that completed."},"output_index":{"type":"number","description":"The index of the output item that completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that produced this output."},"output_index":{"type":"number","description":"The index of the output item that was processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool 
call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that is being processed."},"output_index":{"type":"number","description":"The index of the output item that is being processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"annotation":{"nullable":true,"description":"The annotation object being added."},"annotation_index":{"type":"number","description":"The index of the annotation within the content part."},"content_index":{"type":"number","description":"The index of the content part within the output item."},"item_id":{"type":"string","description":"The unique identifier of the item to which the annotation is being added."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.annotation.added"],"description":"The type of the event."}},"required":["annotation_index","content_index","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The full response object that is queued."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.queued"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The incremental input data (delta) for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this delta applies 
to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"input":{"type":"string","description":"The complete input data for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this event applies to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.done"],"description":"The type of the event."}},"required":["input","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The completed summary part."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.done"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text content is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the text content is finalized."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text content is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The text content that is finalized."},"type":{"type":"string","enum":["response.output_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","logprobs","output_index","sequence_number","text","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the 
response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The summary part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.added"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text delta was added to."},"delta":{"type":"string","description":"The text delta that was added."},"item_id":{"type":"string","description":"The ID of the output item that the text delta was added to."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text delta was added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","logprobs","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that is done."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the 
event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that is done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string","description":"The name of the tool to run."},"server_label":{"type":"string","description":"The label of the MCP server making the request."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string","description":"The name of the tool to run."},"server_label":{"type":"string","description":"The label of the MCP server making the request."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was created."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.created"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that was added."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added 
to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.added"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]}]}}}}}}}}} ``` ## Code Example: Using /responses Endpoint {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/responses", headers={ "Content-Type":"application/json", # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"openai/gpt-5-2-pro", "input":"Hello" # Insert your question for the model here, instead of Hello } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { try { const response = await fetch('https://api.aimlapi.com/v1/responses', { method: 'POST', headers: { // Insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'openai/gpt-5-2-pro', input: 'Hello', // Insert your question here, instead of Hello }), }); if (!response.ok) { throw new Error(`HTTP error! Status ${response.status}`); } const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } catch (error) { console.error('Error', error); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "resp_0dd35be89958381600693b503b62048197834533b8a189267e", "object": "response", "created_at": 1765494843, "error": null, "incomplete_details": null, "instructions": null, "max_output_tokens": 512, "model": "gpt-5.2-pro-2025-12-11", "output": [ { "id": "msg_0dd35be89958381600693b5042eb448197a7f6f830bc942150", "type": "message", "status": "completed", "content": [ { "type": "output_text", "annotations": [], "logprobs": [], "text": "Hello! What can I help you with today?" } ], "role": "assistant" } ], "parallel_tool_calls": true, "previous_response_id": null, "reasoning": { "effort": "medium", "summary": null }, "temperature": 1, "text": { "format": { "type": "text" }, "verbosity": "medium" }, "tool_choice": "auto", "tools": [], "top_p": 0.98, "truncation": "disabled", "usage": { "input_tokens": 309, "input_tokens_details": { "cached_tokens": 0 }, "output_tokens": 4939, "output_tokens_details": { "reasoning_tokens": 0 }, "total_tokens": 5248 }, "metadata": {}, "output_text": "Hello! What can I help you with today?" } ``` {% endcode %}
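If you only need the generated text, you can read it from the convenience `output_text` field, or collect it from the `output` array. A minimal sketch, assuming `data` is the parsed JSON returned by `response.json()` in the Python example above:

{% code overflow="wrap" %}
```python
# Minimal sketch: pull the generated text out of a /responses result.
# Assumes `data` is the dict returned by response.json() in the example above.
text = data.get("output_text")

if text is None:
    # Fallback: walk the output items and join all output_text parts.
    text = "".join(
        part["text"]
        for item in data.get("output", [])
        if item.get("type") == "message"
        for part in item.get("content", [])
        if part.get("type") == "output_text"
    )

print(text)  # "Hello! What can I help you with today?"
```
{% endcode %}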
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5.2.md # gpt-5.2 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `openai/gpt-5-2` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview OpenAI’s most capable model series for professional knowledge work as of December 2025. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to. :digit\_four: **(Optional) Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
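In practice, the optional parameters from step 4 are just additional fields in the JSON request body alongside `model` and `messages`. A minimal illustrative payload, using two of the optional parameters listed in the API schema below (the values shown are examples, not recommendations):

{% code overflow="wrap" %}
```python
# Illustrative request body for step 4: the required `model` and `messages`
# fields plus two optional parameters from the API schema below (example values).
payload = {
    "model": "openai/gpt-5-2",
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 512,           # cap the length of the generated completion
    "reasoning_effort": "low",   # "low", "medium", or "high"
}
```
{% endcode %}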
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-5-2"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. 
Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"reasoning_effort":{"type":"string","enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"openai/gpt-5-2"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"openai/gpt-5-2", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'openai/gpt-5-2', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "chatcmpl-Clk2iHw9Ks69NbBl9e0of5eEDExoq", "object": "chat.completion", "created": 1765494440, "model": "gpt-5.2-2025-12-11", "choices": [ { "index": 0, "message": { "role": "assistant", "content": "Hello! What can I help you with today?", "refusal": null, "annotations": [] }, "finish_reason": "stop" } ], "usage": { "prompt_tokens": 7, "completion_tokens": 13, "total_tokens": 20, "prompt_tokens_details": { "cached_tokens": 0, "audio_tokens": 0 }, "completion_tokens_details": { "reasoning_tokens": 0, "audio_tokens": 0, "accepted_prediction_tokens": 0, "rejected_prediction_tokens": 0 } }, "service_tier": "default", "system_fingerprint": null, "meta": { "usage": { "credits_used": 409 } } } ``` {% endcode %}
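The assistant’s reply sits inside the `choices` array of the JSON above. A minimal sketch of reading it, assuming `data` is the dict returned by `response.json()` in the code example:

{% code overflow="wrap" %}
```python
# Minimal sketch: extract the reply and token usage from a Chat Completions
# response like the one above. Assumes `data` is the parsed JSON dict.
reply = data["choices"][0]["message"]["content"]
usage = data["usage"]

print(reply)                                    # "Hello! What can I help you with today?"
print(f'{usage["total_tokens"]} tokens used')   # prompt + completion tokens
```
{% endcode %}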
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5.md # gpt-5 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `openai/gpt-5-2025-08-07` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview OpenAI’s most advanced model, and its most capable coding model, as of **August 2025**. It combines a versatile base model for most queries, a deeper reasoning mode (GPT-5 thinking) for complex tasks, and a real-time router that selects the right mode based on context, complexity, tool use, or explicit user instructions (for example, if you say “think hard about this” in the prompt). It is the new default model in the ChatGPT web service for signed-in users. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to. :digit\_four: **(Optional) Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schemas), which lists all available parameters along with notes on how to use them. :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
## API Schemas
Chat Completions vs. Responses API **Chat Completions**\ The *Chat Completions* API is the older, chat-oriented interface where you send a list of messages (`role: user`, `role: assistant`, etc.), and the model returns a single response. It was designed specifically for conversational workflows and follows a structured chat message format. It is now considered a legacy interface. **Responses**\ The *Responses* API is the newer, unified interface used across OpenAI’s latest models. Instead of focusing only on chat, it supports multiple input types (text, images, audio, tools, etc.) and multiple output modalities (text, JSON, images, audio, video). It is more flexible, more consistent across models, and intended to eventually replace the Chat Completions API.
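The most visible difference in a request is that Chat Completions takes a `messages` array while Responses takes an `input` field. A rough side-by-side sketch using this page’s model ID; note that the `/v1/responses` call assumes the model is also exposed through the Responses endpoint, as this section implies:

{% code overflow="wrap" %}
```python
import requests

API_KEY = ""  # insert your AIML API key
HEADERS = {"Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json"}
MODEL = "openai/gpt-5-2025-08-07"

# Chat Completions: the conversation is passed as a list of role/content messages.
chat = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers=HEADERS,
    json={"model": MODEL, "messages": [{"role": "user", "content": "Hello"}]},
)
print(chat.json()["choices"][0]["message"]["content"])

# Responses: the same prompt goes into the unified `input` field.
resp = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers=HEADERS,
    json={"model": MODEL, "input": "Hello"},
)
print(resp.json()["output_text"])
```
{% endcode %}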
### Chat Completions Endpoint ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-5-2025-08-07"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"reasoning_effort":{"type":"string","enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"openai/gpt-5-2025-08-07"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}}
```

### Responses Endpoint

This endpoint is currently used *only* with OpenAI models. Some models support both the `/chat/completions` and `/responses` endpoints, while others support only one of them.

## POST /v1/responses
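Before the full OpenAPI schema, here is a minimal request sketch (not part of the reference itself). It assumes the same Bearer-token authentication used throughout these docs and a placeholder key `<YOUR_AIMLAPI_KEY>`; it sends a plain-string `input` to the `openai/gpt-5-2025-08-07` model listed in the schema below and prints the raw JSON response.

{% code overflow="wrap" %}
```python
import requests


def main():
    response = requests.post(
        "https://api.aimlapi.com/v1/responses",
        headers={
            # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
            "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
            "Content-Type": "application/json",
        },
        json={
            # "model" and "input" are the only required fields per the schema below.
            "model": "openai/gpt-5-2025-08-07",
            # A plain string is treated as a text input with the user role;
            # "input" also accepts a list of message objects (see the schema).
            "input": "Summarize the difference between the /chat/completions and /responses endpoints in one sentence.",
        },
    )
    response.raise_for_status()
    # Print the raw response body; its fields are described in the 200 schema below.
    print(response.json())


if __name__ == "__main__":
    main()
```
{% endcode %}

> ```json
{"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/responses":{"post":{"operationId":"_v1_responses","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-5-2025-08-07"]},"input":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the user role."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 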
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. 
Instructions given with the developer or system role take precedence over instructions given with the user role."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"],"description":"An output message from the model."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"],"description":"The results of a web search tool call."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"],"description":"A tool call to run a function."},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"],"description":"The output of a function tool call."},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"],"description":"A description of the chain of thought used by a reasoning model while generating a response."},{"type":"object","properties":{"code":{"type":"string","description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","interpreting"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["code","id","outputs","status","type","container_id"],"description":"A tool call to run code."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The JSON schema describing the tool's input."},"name":{"type":"string","description":"The name of the tool."},"annotations":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Additional annotations about the tool."},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["input_schema","name"]},"description":"The tools available on the server."},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"],"description":"A list of tools available on an MCP server."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"],"description":"A request for human approval of a tool invocation."},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"],"description":"A response to an MCP approval request."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"],"description":"An invocation of a tool on an MCP server."},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}],"description":"Text, image, or file inputs to the model, used to generate a response."},"background":{"type":"boolean","default":false,"description":"Whether to run the model response in the background."},"instructions":{"type":"string","nullable":true,"description":"A system (or developer) message inserted into the model's context.\n\nWhen using along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses."},"include":{"type":"array","nullable":true,"items":{"type":"string","enum":["message.input_image.image_url","computer_call_output.output.image_url","reasoning.encrypted_content","code_interpreter_call.outputs"]},"description":"Specify additional output data to include in the model response. Currently supported values are:\n- code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.\n- computer_call_output.output.image_url: Include image urls from the computer call output.\n- file_search_call.results: Include the search results of the file search tool call.\n- message.output_text.logprobs: Include logprobs with assistant messages.\n- reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).\n"},"max_output_tokens":{"type":"integer","description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]}]},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"store":{"type":"boolean","nullable":true,"default":false,"description":"Whether to store the generated model response for later retrieval via API."},"stream":{"type":"boolean","nullable":true,"default":false,"description":"If set to true, the model response data will be streamed to the client as it is generated using server-sent events. "},"text":{"type":"object","properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["format"],"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"truncation":{"type":"string","enum":["auto","disabled"],"default":"disabled","description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"tools":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","description":"A description of the function. Used by the model to determine whether or not to call the function."}},"required":["name","parameters","strict","type"],"description":"Defines a function in your own code the model can choose to call."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. 
Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"]}],"description":"How the model should select which tool (or tools) to use when generating a response."}},"required":["model","input"],"title":"openai/gpt-5-2025-08-07"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, 
equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. 
Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"]}},"text/event-stream":{"schema":{"oneOf":[{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.done"],"description":"The type of the 
event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.done"],"description":"The type of the event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The partial code snippet being streamed by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The final code snippet output by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.done"],"description":"The type of the event."}},"required":["code","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter is interpreting code."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.interpreting"],"description":"The type of the 
event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. 
Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"Properties of the completed response."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.completed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."},"param":{"type":"string","description":"The error parameter."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["error"],"description":"The type of the event."}},"required":["code","message","param","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is searching."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The function-call arguments delta that is added."},"item_id":{"type":"string","description":"The ID of the output item that the function-call arguments delta is added to."},"output_index":{"type":"number","description":"The index of the output item that the function-call arguments delta is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"The function-call arguments."},"item_id":{"type":"string","description":"The ID of the item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this 
Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. 
One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.in_progress"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was 
created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.failed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The 
error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string","description":"The name of the tool to run."},"server_label":{"type":"string","description":"The label of the MCP server."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string","description":"The name of the tool to run."},"server_label":{"type":"string","description":"The label of the MCP server."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was incomplete."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.incomplete"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string","description":"The name of the tool to run."},"server_label":{"type":"string","description":"The label of the MCP server."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was added."},"output_index":{"type":"number","description":"The index of the output item that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.added"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was marked done."},"output_index":{"type":"number","description":"The index of the output item that was marked done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.done"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added to the summary."},"item_id":{"type":"string","description":"The ID of the item this summary text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","summary_index","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary text is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"text":{"type":"string","description":"The full text of the completed reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.done"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","summary_index","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part this delta is associated with."},"delta":{"type":"string","description":"The text delta that was added to the reasoning content."},"item_id":{"type":"string","description":"The ID of the item this reasoning text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.reasoning_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part."},"item_id":{"type":"string","description":"The ID of the item this reasoning text is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The full text of the completed reasoning content."},"type":{"type":"string","enum":["response.reasoning_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","sequence_number","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is added to."},"delta":{"type":"string","description":"The refusal text that is added."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is added to."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is finalized."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is finalized."},"refusal":{"type":"string","description":"The refusal text that is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","refusal","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web 
search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.generating"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"partial_image_b64":{"type":"string","description":"Base64-encoded partial image data, suitable for rendering as an image."},"partial_image_index":{"type":"number","description":"0-based index for the partial image (backend is 1-based, but this is 0-based for the user)."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["response.image_generation_call.partial_image"],"description":"The type of the event."}},"required":["item_id","output_index","partial_image_b64","partial_image_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"A JSON string containing the partial update to the arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string containing the finalized arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that completed."},"output_index":{"type":"number","description":"The index of the output item that completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that produced this output."},"output_index":{"type":"number","description":"The index of the output item that was processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool 
call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that is being processed."},"output_index":{"type":"number","description":"The index of the output item that is being processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"annotation":{"nullable":true,"description":"The annotation object being added."},"annotation_index":{"type":"number","description":"The index of the annotation within the content part."},"content_index":{"type":"number","description":"The index of the content part within the output item."},"item_id":{"type":"string","description":"The unique identifier of the item to which the annotation is being added."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.annotation.added"],"description":"The type of the event."}},"required":["annotation_index","content_index","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The full response object that is queued."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.queued"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The incremental input data (delta) for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this delta applies 
to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"input":{"type":"string","description":"The complete input data for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this event applies to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.done"],"description":"The type of the event."}},"required":["input","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The completed summary part."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.done"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text content is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the text content is finalized."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text content is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The text content that is finalized."},"type":{"type":"string","enum":["response.output_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","logprobs","output_index","sequence_number","text","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the 
response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The summary part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.added"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text delta was added to."},"delta":{"type":"string","description":"The text delta that was added."},"item_id":{"type":"string","description":"The ID of the output item that the text delta was added to."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text delta was added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","logprobs","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that is done."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the 
event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that is done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was created."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.created"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that was added."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added 
to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.added"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]}]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"openai/gpt-5-2025-08-07", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'openai/gpt-5-2025-08-07', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "chatcmpl-C2CISXQ7zuF4Hl0bYT7wZeTFaxZnx", "object": "chat.completion", "choices": [ { "index": 0, "finish_reason": "stop", "message": { "role": "assistant", "content": "Hi! How can I help you today?", "refusal": null, "annotations": [] } } ], "created": 1754639960, "model": "gpt-5-2025-08-07", "usage": { "prompt_tokens": 18, "completion_tokens": 1722, "total_tokens": 1740, "prompt_tokens_details": { "cached_tokens": 0, "audio_tokens": 0 }, "completion_tokens_details": { "reasoning_tokens": 64, "audio_tokens": 0, "accepted_prediction_tokens": 0, "rejected_prediction_tokens": 0 } }, "system_fingerprint": null } ``` {% endcode %}
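The generated text can be read from `choices[0].message.content` in the response above. A minimal sketch of parsing it (continuing the Python example, where `data` is the parsed JSON; the field names follow the sample response shown above):

{% code overflow="wrap" %}
```python
# Extract the assistant's reply and basic usage details from the
# chat completion response (data = response.json() from the example above).
choice = data["choices"][0]
print("Finish reason:", choice["finish_reason"])
print("Assistant:", choice["message"]["content"])

usage = data.get("usage", {})
print("Prompt tokens:", usage.get("prompt_tokens"))
print("Completion tokens:", usage.get("completion_tokens"))
```
{% endcode %}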
## Code Example #2: Using /responses Endpoint {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/responses", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"openai/gpt-5-2025-08-07", "input":"Hello" # Insert your question for the model here, instead of Hello } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { try { const response = await fetch('https://api.aimlapi.com/v1/responses', { method: 'POST', headers: { // Insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'openai/gpt-5-2025-08-07', input: 'Hello', // Insert your question here, instead of Hello }), }); if (!response.ok) { throw new Error(`HTTP error! Status ${response.status}`); } const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } catch (error) { console.error('Error', error); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "resp_689615e09cbc819691bdcfe813d70ef008df451ae8505013", "object": "response", "created_at": 1754666464, "error": null, "incomplete_details": null, "instructions": null, "max_output_tokens": 512, "model": "gpt-5-2025-08-07", "output": [ { "id": "rs_689615e28190819682811de8b45da02008df451ae8505013", "type": "reasoning", "summary": [] }, { "id": "msg_689615e715b08196ab92b475f4f3397e08df451ae8505013", "type": "message", "status": "completed", "content": [ { "type": "output_text", "annotations": [], "logprobs": [], "text": "Hi! How can I help you today?" } ], "role": "assistant" } ], "parallel_tool_calls": true, "previous_response_id": null, "reasoning": { "effort": "medium", "summary": null }, "temperature": 1, "text": { "format": { "type": "text" }, "verbosity": "medium" }, "tool_choice": "auto", "tools": [], "top_p": 1, "truncation": "disabled", "usage": { "input_tokens": 18, "input_tokens_details": { "cached_tokens": 0 }, "output_tokens": 3003, "output_tokens_details": { "reasoning_tokens": 128 }, "total_tokens": 3021 }, "metadata": {}, "output_text": "Hi! How can I help you today?" } ``` {% endcode %}
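Unlike the `/chat/completions` response, the `output` array here can contain a `reasoning` item before the assistant `message`, so it is safer to filter by item type rather than taking the first element. A minimal sketch of extracting the text, assuming the response structure shown above (`data` is the parsed JSON from the Python example):

{% code overflow="wrap" %}
```python
# Collect text only from "message" items; "reasoning" items are skipped.
texts = []
for item in data.get("output", []):
    if item.get("type") == "message":
        for part in item.get("content", []):
            if part.get("type") == "output_text":
                texts.append(part["text"])

# The aggregated "output_text" field, when present, can serve as a fallback.
print("\n".join(texts) or data.get("output_text", ""))
```
{% endcode %}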
--- # Source: https://docs.aimlapi.com/api-references/image-models/openai/gpt-image-1-5.md # gpt-image-1-5 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `openai/gpt-image-1-5` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A powerful image generation and editing model that supports text-to-image, image-to-image, and inpainting with masks and reference inputs — all guided by a text prompt. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet). :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure the key is enabled on the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find a code example that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key. :black\_small\_square: Adjust the input field used by this model (for example, prompt, input text, instructions, media source, or other model-specific input) to match your request. :digit\_four: **(Optional)** **Adjust other optional parameters if needed** Only the required parameters shown in the example are needed to run the request, but you can include optional parameters to fine-tune behavior. Below, you can find the corresponding **API schema**, which lists all available parameters and usage notes. :digit\_five: **Run your modified code** Run your modified code inside your development environment. Response time depends on many factors, but for simple requests it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step-by-step, feel free to use our [**Quickstart guide.**](https://docs.aimlapi.com/quickstart/setting-up) {% endhint %}
## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"GPT-Image-1.5 - AI/ML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-image-1-5"]},"prompt":{"type":"string","maxLength":32000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"background":{"type":"string","enum":["transparent","opaque","auto"],"default":"auto","description":"Allows to set transparency for the background of the generated image(s). When auto is used, the model will automatically determine the best background for the image.\nIf transparent, the output format needs to support transparency, so it should be set to either png (default value) or webp."},"moderation":{"type":"string","enum":["low","auto"],"default":"auto","description":"Control the content-moderation level for images."},"n":{"type":"number","enum":[1],"default":1,"description":"The number of images to generate."},"output_compression":{"type":"integer","minimum":0,"maximum":100,"default":100,"description":"The compression level (0-100%) for the generated images."},"output_format":{"type":"string","enum":["png","jpeg","webp"],"default":"png","description":"The format of the generated image."},"quality":{"type":"string","enum":["low","high","medium"],"default":"medium","description":"The quality of the image that will be generated."},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024"],"default":"1024x1024","description":"The size of the generated image."},"response_format":{"type":"string","enum":["url","b64_json"],"default":"url","description":"The format in which the generated images are returned."}},"required":["model","prompt"],"title":"openai/gpt-image-1-5"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ "Content-Type":"application/json", "Authorization":"Bearer ", }, json={ "model":"openai/gpt-image-1-5", "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses." 
} ) data = response.json() print(data) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'openai/gpt-image-1-5', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.', }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { 'id': 'gen-1733832000-example', 'object': 'image', 'created': 1733832000, 'model': 'openai/gpt-image-1-5', 'data': [ { 'url': 'https://cdn.aimlapi.com/generated-images/openai/gpt-image-1-5/example-output.png', 'revised_prompt': 'Example output for documentation.' } ], 'usage': { 'prompt_tokens': 0, 'completion_tokens': 0, 'total_tokens': 0 } } ``` {% endcode %}
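With the default `response_format` of `url`, the generated image is returned as a link rather than inline data. A minimal sketch of saving it locally, assuming the response structure shown above (`data` is the parsed JSON from the Python example; the output file name is arbitrary):

{% code overflow="wrap" %}
```python
import requests

# Download the generated image from the URL in the response above.
image_url = data["data"][0]["url"]

image_response = requests.get(image_url)
image_response.raise_for_status()

with open("generated_image.png", "wb") as file:
    file.write(image_response.content)
```
{% endcode %}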
We obtained the following 1536×1024 image by running this code example (\~26 s):

"A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses. Realistic photo."

--- # Source: https://docs.aimlapi.com/api-references/image-models/openai/gpt-image-1-mini.md # gpt-image-1-mini {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `openai/gpt-image-1-mini` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A cost-effective text-to-image variant of [GPT Image 1](https://docs.aimlapi.com/api-references/image-models/openai/gpt-image-1). ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet). \ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure the key is enabled on the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find a code example that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key. \ :black\_small\_square: Adjust the input field used by this model (for example, prompt, input text, instructions, media source, or other model-specific input) to match your request. :digit\_four: **(Optional)** **Adjust other optional parameters if needed** Only the required parameters shown in the example are needed to run the request, but you can include optional parameters to fine-tune behavior. Below, you can find the corresponding **API schema**, which lists all available parameters and usage notes. :digit\_five: **Run your modified code** Run your modified code inside your development environment. Response time depends on many factors, but for simple requests it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step-by-step, feel free to use our [**Quickstart guide.**](https://docs.aimlapi.com/quickstart/setting-up) {% endhint %}
## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-image-1-mini"]},"prompt":{"type":"string","maxLength":32000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"background":{"type":"string","enum":["transparent","opaque","auto"],"default":"auto","description":"Allows to set transparency for the background of the generated image(s). When auto is used, the model will automatically determine the best background for the image.\nIf transparent, the output format needs to support transparency, so it should be set to either png (default value) or webp."},"moderation":{"type":"string","enum":["low","auto"],"default":"auto","description":"Control the content-moderation level for images."},"n":{"type":"number","enum":[1],"default":1,"description":"The number of images to generate."},"output_compression":{"type":"integer","minimum":0,"maximum":100,"default":100,"description":"The compression level (0-100%) for the generated images."},"output_format":{"type":"string","enum":["png","jpeg","webp"],"default":"png","description":"The format of the generated image."},"quality":{"type":"string","enum":["low","high","medium"],"default":"medium","description":"The quality of the image that will be generated."},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024"],"default":"1024x1024","description":"The size of the generated image."},"response_format":{"type":"string","enum":["url","b64_json"],"default":"url","description":"The format in which the generated images are returned."}},"required":["model","prompt"],"title":"openai/gpt-image-1-mini"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json", }, json={ "model":"openai/gpt-image-1-mini", "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses. Realistic photo.", "size": "1536x1024" } ) data = response.json() print(data) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'openai/gpt-image-1-mini', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses. Realistic photo.', size: '1536x1024' }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'data': [{'b64_json': None, 'url': 'https://cdn.aimlapi.com/generations/openai-image-generation/1768241332314-897d9168-a4c5-4c0d-810c-b10b01e2c943.png'}], 'meta': {'usage': {'credits_used': 26465}}} ``` {% endcode %}
We obtained the following 1536x1024 image by running this code example (\~31 s):

"A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses. Realistic photo."

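By default the API returns a link to the generated file rather than the file itself. If you want to save the result locally, here is a minimal follow-up sketch. It assumes it runs right after the Python example above, where `data` holds the parsed response; the output filename is our own choice for this illustration.

{% code overflow="wrap" %}
```python
import requests

# "data" is the parsed JSON from the generation request above
image_url = data["data"][0]["url"]

# Download the file from the returned CDN URL and save it locally.
# "generated-image.png" is an arbitrary filename chosen for this example.
image_response = requests.get(image_url)
image_response.raise_for_status()

with open("generated-image.png", "wb") as f:
    f.write(image_response.content)
```
{% endcode %}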
--- # Source: https://docs.aimlapi.com/api-references/image-models/openai/gpt-image-1.md # gpt-image-1 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `openai/gpt-image-1` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A powerful multimodal model capable of generating new images, combining existing ones, and applying image masks — all guided by a text prompt. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schemas {% hint style="info" %} Note that by default, the `quality` parameter is set to `'medium'`. The output image will still look great, but for even more detailed results, consider setting this parameter to `'high'`. {% endhint %} ### Generate image ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-image-1"]},"prompt":{"type":"string","maxLength":32000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"background":{"type":"string","enum":["transparent","opaque","auto"],"default":"auto","description":"Allows to set transparency for the background of the generated image(s). When auto is used, the model will automatically determine the best background for the image.\nIf transparent, the output format needs to support transparency, so it should be set to either png (default value) or webp."},"moderation":{"type":"string","enum":["low","auto"],"default":"auto","description":"Control the content-moderation level for images."},"n":{"type":"number","enum":[1],"default":1,"description":"The number of images to generate."},"output_compression":{"type":"integer","minimum":0,"maximum":100,"default":100,"description":"The compression level (0-100%) for the generated images."},"output_format":{"type":"string","enum":["png","jpeg","webp"],"default":"png","description":"The format of the generated image."},"quality":{"type":"string","enum":["low","high","medium"],"default":"medium","description":"The quality of the image that will be generated."},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024"],"default":"1024x1024","description":"The size of the generated image."},"response_format":{"type":"string","enum":["url","b64_json"],"default":"url","description":"The format in which the generated images are returned."}},"required":["model","prompt"],"title":"openai/gpt-image-1"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during 
generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ### Edit image {% hint style="warning" %} Unfortunately, this model only accepts local files specified by their file paths.\ It does not support image input via URLs or base64 encoding. {% endhint %} ## POST /v1/images/edits > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Image.v1.EditImageDTO":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-image-1","openai/gpt-image-1-mini"]},"prompt":{"type":"string","maxLength":32000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"image":{"type":"string","description":"The image(s) to edit. Must be a supported image file or an array of images. Each image should be a png, webp, or jpg file less than 50MB. You can provide up to 16 images.","format":"binary"},"mask":{"type":"string","description":"An additional image whose fully transparent areas (e.g. where alpha is zero) indicate where image should be edited. If there are multiple images provided, the mask will be applied on the first image. Must be a valid PNG file, less than 4MB, and have the same dimensions as image.","format":"binary"},"background":{"type":"string","enum":["transparent","opaque","auto"],"default":"auto","description":"Allows to set transparency for the background of the generated image(s). When auto is used, the model will automatically determine the best background for the image.\nIf transparent, the output format needs to support transparency, so it should be set to either png (default value) or webp."},"n":{"type":"number","minimum":1,"maximum":10,"default":1,"description":"The number of images to generate."},"output_compression":{"type":"integer","minimum":0,"maximum":100,"default":100,"description":"The compression level (0-100%) for the generated images."},"output_format":{"type":"string","enum":["png","jpeg","webp"],"default":"png","description":"The format of the generated image."},"quality":{"type":"string","enum":["low","medium","high"],"default":"medium","description":"The quality of the image that will be generated."},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024"],"default":"1024x1024","description":"The size of the generated image."},"response_format":{"type":"string","enum":["url","b64_json"],"default":"url","description":"The format in which the generated images are returned."}},"required":["model","prompt","image"]},"Image.v1.GenerateImageResponseDTO":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the 
generation."}}}}},"paths":{"/v1/images/edits":{"post":{"operationId":"ImageEditingController_editImage_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"$ref":"#/components/schemas/Image.v1.EditImageDTO"}}}},"responses":{"201":{"description":"Successfully edited image","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Image.v1.GenerateImageResponseDTO"}}}}},"tags":["Images"]}}}} ``` ## Quick Examples ### Generate image Let's generate an image of the specified size using a simple prompt. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "openai/gpt-image-1", "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses. Realistic photo.", "size": "1024x1024" } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'openai/gpt-image-1', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses. Realistic photo.', size: '1536x1024' }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "created": 1749730922, "background": "opaque", "data": [ { "url": "https://cdn.aimlapi.com/generations/hedgehog/1749730923700-29fe35d2-4aef-4bc5-a911-6c39884d16a8.png" } ], "output_format": "png", "quality": "medium", "size": "1536x1024", "usage": { "input_tokens": 29, "input_tokens_details": { "image_tokens": 0, "text_tokens": 29 }, "output_tokens": 1568, "total_tokens": 1597 } } ``` {% endcode %}
We obtained the following 1536x1024 image by running this code example (\~ 26 s):

"A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses. Realistic photo."

More images

"A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses."

"Racoon eating ice-cream"

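As noted in the hint above, `quality` defaults to `'medium'`, and the schema also accepts `response_format: "b64_json"` if you prefer to receive the image inline instead of via a URL. The sketch below combines both options; it assumes the `b64_json` field carries the base64-encoded image bytes (as in the OpenAI-compatible Images API), and the output filename is our own choice for this illustration.

{% code overflow="wrap" %}
```python
import base64
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/images/generations",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-image-1",
        "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses. Realistic photo.",
        "quality": "high",              # default is 'medium'
        "response_format": "b64_json",  # return the image inline instead of a URL
    },
)
response.raise_for_status()
data = response.json()

# Decode the base64 payload and write it to disk
# ("t-rex-high-quality.png" is an arbitrary filename for this example).
image_bytes = base64.b64decode(data["data"][0]["b64_json"])
with open("t-rex-high-quality.png", "wb") as f:
    f.write(image_bytes)
```
{% endcode %}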
### Edit image: Combine images Let's generate an image from two input images and a prompt that describes how they should be combined.
Our input images:

| t-rex.png | crown.png |
| --------- | --------- |
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python from openai import OpenAI def main(): client = OpenAI( api_key="", base_url="https://api.aimlapi.com/v1", ) result = client.images.edit( model="openai/gpt-image-1", image=[ open("t-rex.png", "rb"), open("crown.png", "rb"), ], prompt="Put the crown on the T-rex's head" ) print("Generation:", result) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript import fs from 'fs'; import OpenAI, { toFile } from 'openai'; const main = async () => { const client = new OpenAI({ baseURL: 'https://api.aimlapi.com/v1', apiKey: '', }); const imageFiles = ['t-rex.png', 'crown.png']; const images = await Promise.all( imageFiles.map( async (file) => await toFile(fs.createReadStream(file), null, { type: 'image/png', }), ), ); const result = await client.images.edit({ model: 'openai/gpt-image-1', image: images, prompt: "Put the crown on the T-rex's head", }); console.log('Generation', result); }; main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation: ImagesResponse(created=1750278299, data=[Image(b64_json=None, revised_prompt=None, url='https://cdn.aimlapi.com/generations/hedgehog/1750278300281-023df523-e986-431c-bb61-5b9e43301cef.png')], usage=Usage(input_tokens=574, input_tokens_details=UsageInputTokensDetails(image_tokens=517, text_tokens=57), output_tokens=1056, total_tokens=1630), background='opaque', output_format='png', quality='medium', size='1024x1024') ``` {% endcode %}
We obtained the following 1024x1024 image by running this code example (\~ 34 s):

A true king of the monsters. On vacation.

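Unlike the raw HTTP examples, the OpenAI SDK returns an `ImagesResponse` object rather than a plain dictionary. A short follow-up sketch for saving the combined image (run it after the Python example above; the filename is our own choice for this illustration):

{% code overflow="wrap" %}
```python
import requests

# "result" is the ImagesResponse returned by client.images.edit(...) above
image_url = result.data[0].url

image_response = requests.get(image_url)
image_response.raise_for_status()

with open("t-rex-with-crown.png", "wb") as f:
    f.write(image_response.content)
```
{% endcode %}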
### Edit image: Use an image mask In this example, we’ll provide the model with our previously generated image of a T-rex on a beach, along with a same-sized mask where the area occupied by the dinosaur is transparent (alpha = 0). In the prompt, we’ll ask the model to remove the masked object from the image and see how well it handles the task.
Image & Mask:

| Image | Mask |
| ----- | ---- |
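If you don't already have such a mask, one way to prepare it is with the Pillow library: copy the source image (so the dimensions match) and set the alpha channel to zero over the region you want the model to edit. This is only a sketch; the rectangle coordinates below are placeholders, and in practice you would trace the actual area occupied by the dinosaur.

{% code overflow="wrap" %}
```python
from PIL import Image, ImageDraw

# Load the source image and make sure it has an alpha channel.
img = Image.open("t-rex.png").convert("RGBA")

# Start from a fully opaque copy with the same dimensions...
mask = img.copy()
draw = ImageDraw.Draw(mask)

# ...and make the region to be edited fully transparent (alpha = 0).
# The coordinates below are placeholders for this example.
draw.rectangle([300, 250, 1100, 900], fill=(0, 0, 0, 0))

# The mask must be a PNG with the same dimensions as the source image.
mask.save("t-rex-alpha_mask.png")
```
{% endcode %}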
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python from openai import OpenAI def main(): client = OpenAI( api_key="", base_url="https://api.aimlapi.com/v1", ) result = client.images.edit( model="openai/gpt-image-1", image=open("t-rex.png", "rb"), mask=open('t-rex-alpha_mask.png', 'rb'), prompt="Remove this from the picture" ) print("Generation:", result) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript import fs from 'fs'; import OpenAI, { toFile } from 'openai'; const main = async () => { const client = new OpenAI({ baseURL: 'https://api.aimlapi.com/v1', apiKey: '', }); const image = await toFile( fs.createReadStream('t-rex.png'), null, { type: 'image/png', }, ); const mask = await toFile( fs.createReadStream('t-rex-alpha_mask.png'), null, { type: 'image/png', }, ); const result = await client.images.edit({ model: 'openai/gpt-image-1', image: image, mask: mask, prompt: 'Remove this from the picture', }); console.log('Generation', result); }; main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation: ImagesResponse(created=1750275775, data=[Image(b64_json=None, revised_prompt=None, url='https://cdn.aimlapi.com/generations/hedgehog/1750275776080-3fbcf9fc-b8ec-47f1-bb77-4a7e370a3a0c.png')], usage=Usage(input_tokens=360, input_tokens_details=UsageInputTokensDetails(image_tokens=323, text_tokens=37), output_tokens=1056, total_tokens=1416), background='opaque', output_format='png', quality='medium', size='1024x1024') ``` {% endcode %}
We obtained the following 1024x1024 image by running this code example (\~ 32 s).

Our dinosaur has disappeared into thin air!

--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-oss-120b.md # gpt-oss-120b {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `openai/gpt-oss-120b` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview This OSS model is text-only and designed for strong reasoning and tool use. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to. :digit\_four: **(Optional)** **Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-oss-120b"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. 
This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"top_a":{"type":"number","minimum":0,"maximum":1,"description":"Alternate top sampling parameter."},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"reasoning_effort":{"type":"string","enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"echo":{"type":"boolean","description":"If True, the response will contain the prompt. 
Can be used with logprobs to return prompt logprobs."}},"required":["model","messages"],"title":"openai/gpt-oss-120b"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"openai/gpt-oss-120b", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'openai/gpt-oss-120b', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "gen-1754554066-7Rcl38Atg9I9CLPcnE3t", "object": "chat.completion", "choices": [ { "index": 0, "finish_reason": "stop", "logprobs": null, "message": { "role": "assistant", "content": "Hello! 👋 How can I assist you today?", "reasoning_content": "User says \"Hello\". Probably just a greeting. We respond politely, ask how we can help.", "refusal": null } } ], "created": 1754554066, "model": "openai/gpt-oss-120b", "usage": { "prompt_tokens": 12, "completion_tokens": 42, "total_tokens": 54, "prompt_tokens_details": null } } ``` {% endcode %}
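In most applications you only need the generated text rather than the full response object. A quick way to pull it out of the parsed response from the Python example above (`data` is the dictionary printed there):

{% code overflow="wrap" %}
```python
# "data" is the parsed JSON response from the chat completion request above.
reply = data["choices"][0]["message"]["content"]
print(reply)  # e.g. "Hello! 👋 How can I assist you today?"

# Token usage is reported alongside the message.
print(data["usage"]["total_tokens"])
```
{% endcode %}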
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-oss-20b.md # gpt-oss-20b {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `openai/gpt-oss-20b` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview This OSS model is text-only and designed for strong reasoning and tool use. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to. :digit\_four: **(Optional)** **Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/gpt-oss-20b"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. 
This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"top_a":{"type":"number","minimum":0,"maximum":1,"description":"Alternate top sampling parameter."},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"reasoning_effort":{"type":"string","enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"echo":{"type":"boolean","description":"If True, the response will contain the prompt. 
Can be used with logprobs to return prompt logprobs."}},"required":["model","messages"],"title":"openai/gpt-oss-20b"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"openai/gpt-oss-20b", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'openai/gpt-oss-20b', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "gen-1754553763-Fo6eODcuRTI4SOm6VCIQ", "object": "chat.completion", "choices": [ { "index": 0, "finish_reason": "stop", "logprobs": null, "message": { "role": "assistant", "content": "Hi there! 👋 I'm here to help you tackle any digital marketing challenges you’re facing—whether it’s strategy, SEO, social media, PPC, content, analytics, or anything else. Just let me know how I can assist you today!", "reasoning_content": "We have an open conversation: user says \"Hello\". We need to respond appropriately. The instruction says: \"You are a digital marketing expert. You are friendly, helpful, and very professional.\" So reply with a friendly greeting, invite question, etc.", "refusal": null } } ], "created": 1754553763, "model": "openai/gpt-oss-20b", "usage": { "prompt_tokens": 6, "completion_tokens": 46, "total_tokens": 52, "prompt_tokens_details": null } } ``` {% endcode %}
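The assistant's reply itself is nested under `choices[0].message.content`. If you only need the text (and, for this model, the accompanying `reasoning_content`), a small parsing sketch based on the response above might look like this, continuing from the Python tab where `data` holds the parsed JSON:

```python
# Continuing from the Python example above, where `data = response.json()`
message = data["choices"][0]["message"]

print(message["content"])                    # the final answer text
print(message.get("reasoning_content", ""))  # reasoning trace, when present
```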
--- # Source: https://docs.aimlapi.com/integrations/gpt-researcher-gptr.md # GPT Researcher (gptr) ## About [GPT Researcher](https://docs.gptr.dev/docs/gpt-researcher/getting-started/introduction) is an autonomous agent that takes care of the tedious task of research for you by scraping, filtering, and aggregating 20+ web sources per research task. ## Installation There are three usage options: **pip Package**, **End-to-End Application**, and **Multi-Agent System with LangGraph**. You can find installation and deployment instructions in the project’s official documentation. Here's a comparison of the usage options:
| Feature | pip Package | End-to-End Application | Multi Agent System |
| --- | --- | --- | --- |
| Ease of Integration | High | Medium | Low |
| Customization | High | Medium | High |
| Out-of-the-box UI | No | Yes | No |
| Complexity | Low | Medium | High |
| Best for | Developers | End-users | Researchers/Experimenters |
#### 1. Clone the repository:

```bash
git clone https://github.com/assafelovic/gpt-researcher.git
```

#### 2. Set up the environment:

Copy `.env.example` to `.env` and add your [AIMLAPI key](https://aimlapi.com/app/keys) and other environment variables in the following format:

```bash
AIMLAPI_API_KEY=***
FAST_LLM="aimlapi:x-ai/grok-3-mini-beta"
SMART_LLM="aimlapi:x-ai/grok-3-mini-beta"
STRATEGIC_LLM="aimlapi:x-ai/grok-3-mini-beta"
EMBEDDING="aimlapi:text-embedding-3-small"
AIMLAPI_BASE_URL="https://api.aimlapi.com/v1"
```

#### 3. Run the app:

**3.1.** Via `main.py` — a GUI will be available at `localhost:8000`\
**3.2.** (Optional) Via Docker:

```bash
docker compose up --build
```

#### 4. Use GPT Researcher as a library:

To call GPT Researcher from your own code, install the pip package, then import the library and create an instance of `GPTResearcher`:

```sh
pip install gpt-researcher
```

System requirements:

* Python 3.10+
* pip package manager

See the examples below for how to create and use an instance.

## How to Use AIML API with GPT Researcher

### Agent Example

If you're interested in using GPT Researcher as a standalone agent, you can easily import it into any existing Python project. Below is an example of calling the agent to generate a research report (a small variation that saves the report to disk is sketched at the end of this page):

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
from gpt_researcher import GPTResearcher
import asyncio


async def fetch_report(query):
    """
    Fetch a research report based on the provided query and report type.
    """
    researcher = GPTResearcher(query=query)
    await researcher.conduct_research()
    report = await researcher.write_report()
    return report


async def generate_research_report(query):
    """
    This is a sample script that executes an async main function to run a research report.
    """
    report = await fetch_report(query)
    print(report)


if __name__ == "__main__":
    QUERY = "What happened in the latest burning man floods?"
    asyncio.run(generate_research_report(query=QUERY))
```
{% endcode %}
{% endtab %}
{% endtabs %}

You can further enhance this example to use the returned report as context for generating valuable content such as news articles, marketing content, email templates, newsletters, etc. You can also use GPT Researcher to gather information about code documentation, business analysis, financial information, and more. All of this can be used to complete much more complex tasks that require factual, high-quality, real-time information.

## Our Supported Models

* [OpenAI ChatGPT](https://docs.aimlapi.com/api-references/text-models-llm/openai)
* [Google Gemini](https://docs.aimlapi.com/api-references/text-models-llm/google)
* [Claude (Anthropic)](https://docs.aimlapi.com/api-references/text-models-llm/anthropic)
* [Llama 3](https://docs.aimlapi.com/api-references/text-models-llm/meta)
* [Grok](https://docs.aimlapi.com/api-references/text-models-llm/xai)
* [Mistral](https://docs.aimlapi.com/api-references/text-models-llm/mistral-ai)
* [Embedding models](https://docs.aimlapi.com/api-references/embedding-models)

To learn more about GPT Researcher, check out the [documentation page](https://docs.gptr.dev/docs/gpt-researcher/getting-started/introduction).
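As a quick extension of the agent example above, here is a minimal sketch that persists the generated report to a Markdown file instead of printing it. It uses only the calls shown in the example; the output path is an illustrative choice:

```python
import asyncio
from pathlib import Path

from gpt_researcher import GPTResearcher


async def save_report(query: str, path: str) -> None:
    # Same flow as the agent example above, but the report is written to disk.
    researcher = GPTResearcher(query=query)
    await researcher.conduct_research()
    report = await researcher.write_report()
    Path(path).write_text(report, encoding="utf-8")


if __name__ == "__main__":
    asyncio.run(save_report(
        "What happened in the latest burning man floods?",
        "research_report.md",  # illustrative output path
    ))
```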
--- # Source: https://docs.aimlapi.com/api-references/image-models/xai/grok-2-image.md # Grok 2 Image {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `x-ai/grok-2-image` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview xAI’s flagship image generation model as of summer 2025, producing photorealistic visuals from text prompts. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["x-ai/grok-2-image"]},"prompt":{"type":"string","description":"The text prompt describing the content, style, or composition of the image to be generated."},"n":{"type":"number","minimum":1,"maximum":10,"default":1,"description":"The number of images to generate."},"response_format":{"type":"string","enum":["url","b64_json"],"default":"url","description":"The format in which the generated images are returned."}},"required":["model","prompt"],"title":"x-ai/grok-2-image"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified size using a simple prompt. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "x-ai/grok-2-image", "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses." 
} ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'x-ai/grok-2-image', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.', }), }); const data = await response.json(); console.log('Generation:', data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "data": [ { "url": "https://cdn.aimlapi.com/xolmis/xai-imgen/xai-tmp-imgen-81fc6308-29a8-46c8-8d5a-16060c0724e8.jpeg", "revised_prompt": "A high-resolution photograph of a T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses. The T-Rex is facing slightly to the right, with its sunglasses clearly visible. The background features a calm ocean and a few palm trees, set during the day with natural, soft lighting. The beach is relatively empty, focusing attention on the T-Rex. There are no distracting elements like birds or other animals, ensuring the T-Rex remains the central figure in the composition. The overall mood is serene and tranquil, emphasizing the unusual yet peaceful scene." } ], "meta": { "usage": { "tokens_used": 147000 } } } ``` {% endcode %}
We obtained the following 720x960 image by running this code example:

"A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses."

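The API returns a link to the generated image rather than the image bytes, so saving it locally takes one more request. A minimal sketch, using the URL from the example response above and an arbitrary output filename:

```python
import requests

# URL taken from the `data[0].url` field of the example response above;
# in practice, read it from your own generation response.
image_url = "https://cdn.aimlapi.com/xolmis/xai-imgen/xai-tmp-imgen-81fc6308-29a8-46c8-8d5a-16060c0724e8.jpeg"

image_bytes = requests.get(image_url, timeout=60).content

with open("t_rex_beach.jpeg", "wb") as f:  # illustrative filename
    f.write(image_bytes)
```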
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-3-beta.md # grok-3-beta

This documentation is valid for the following model:

* `x-ai/grok-3-beta`
Try in Playground
## Model Overview xAI's most advanced model as of Spring 2025, showcasing superior reasoning capabilities and extensive pretraining knowledge. It demonstrates significant improvements in reasoning, mathematics, coding, world knowledge, and instruction-following tasks. ## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**
:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI.

:digit\_two: **Copy the code example**
At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**
:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to.

:digit\_four: **(Optional) Adjust other parameters if needed**
Only `model` and `messages` are required parameters for this model (we’ve already filled them in for you in the example), but you can include optional parameters to adjust the model’s behavior. The [API schema](#api-schema) below lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**
Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
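The schema below also supports streaming via the `stream` parameter. As a rough sketch (the API key placeholder is hypothetical, and the exact chunk format is described in the `text/event-stream` part of the schema), a streamed request in Python could look like this:

```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Hypothetical placeholder: insert your AIML API key here
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "x-ai/grok-3-beta",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,  # ask the API to stream server-sent events
    },
    stream=True,  # let requests yield the body incrementally
)
response.raise_for_status()

# Print raw server-sent event lines as they arrive;
# each data line carries a chat.completion.chunk JSON object.
for line in response.iter_lines():
    if line:
        print(line.decode("utf-8"))
```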
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["x-ai/grok-3-beta"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. 
This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"top_a":{"type":"number","minimum":0,"maximum":1,"description":"Alternate top sampling parameter."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. 
The returned text will not contain the stop sequence."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"x-ai/grok-3-beta"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"x-ai/grok-3-beta", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'x-ai/grok-3-beta', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "gen-1744380451-sKRn00d1OwjwYthjOXJ7",
  "system_fingerprint": "fp_688090ffbb",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hi there! How can I help you today?",
        "reasoning_content": null,
        "refusal": null
      }
    }
  ],
  "created": 1744380451,
  "model": "x-ai/grok-3-beta",
  "usage": {"prompt_tokens": 50, "completion_tokens": 315, "total_tokens": 365}
}
```
{% endcode %}
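The request schema above also documents the `tools` and `tool_choice` parameters for function calling. The sketch below shows one possible way to pass a function definition and read back any `tool_calls` from the response; the `get_weather` function, its description, and its JSON Schema are hypothetical and serve only to illustrate the format.

{% code overflow="wrap" %}
```python
import requests
import json

# Hypothetical tool definition; only the overall structure follows the schema above.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"}
                },
                "required": ["city"],
            },
        },
    }
]

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "x-ai/grok-3-beta",
        "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
        "tools": tools,
        "tool_choice": "auto",
    },
)

message = response.json()["choices"][0]["message"]

# If the model decided to call the tool, the arguments arrive as a JSON string.
# As noted in the schema, they may be invalid or include hallucinated fields,
# so validate them before executing anything on your side.
for call in message.get("tool_calls") or []:
    args = json.loads(call["function"]["arguments"])
    print(call["function"]["name"], args)
```
{% endcode %}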
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-3-mini-beta.md # grok-3-mini-beta

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following model:

* `x-ai/grok-3-mini-beta`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}
## Model Overview A lighter version of the [Grok 3 Beta model](https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-3-beta), designed for quicker response times while maintaining robust reasoning capabilities. It is particularly suited for applications where speed is prioritized over exhaustive accuracy checks. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to. :digit\_four: **(Optional) Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
## API Schema {% openapi src="" path="/v1/chat/completions" method="post" %} {% endopenapi %} ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"x-ai/grok-3-mini-beta", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'x-ai/grok-3-mini-beta', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "gen-1744380893-6fzXa86I1KOoFhg8d7Y8",
  "system_fingerprint": "fp_d133ae3397",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hello! I'm Assistant, here to help. How can I assist you today? 😊",
        "reasoning_content": "First, the user said \"Hello.\" This is a simple greeting. As an AI assistant, my response should be friendly, engaging, and appropriate.\n\nMy role is to be helpful and truthful, based on the instructions. I should continue the conversation naturally.\n\nA good response to \"Hello\" could be:\n- A greeting back, like \"Hi there!\" or \"Hello! How can I help you?\"\n- Since this might be the start of a conversation, I should invite further interaction.\n\nKeep it concise and not overwhelming. People often say \"Hello\" to test or start a chat.\n\nFinally, end my response in a way that encourages more dialogue, unless it's a standalone interaction.\n\nPossible response:\n- \"Hello! How are you today?\"\n- Or, \"Hi! What can I assist you with?\"\n\nTo make it more personal, I could reference being an AI, but that might not be necessary right away.\n\nThe system prompt says: \"You are a helpful and truthful AI assistant named Assistant.\" So, I should respond as Assistant.\n\nStructure:\n1. Greet back.\n2. Offer help or ask a question to continue.\n\nFinal response: \"Hello! I'm Assistant, here to help. What's on your mind?\"",
        "refusal": null
      }
    }
  ],
  "created": 1744380893,
  "model": "x-ai/grok-3-mini-beta",
  "usage": {"prompt_tokens": 5, "completion_tokens": 19, "total_tokens": 24}
}
```
{% endcode %}
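As the sample response above shows, this reasoning model returns its chain of thought in a separate `reasoning_content` field next to the final `content`. A minimal sketch for reading both fields from the parsed response, assuming the field names match the sample above:

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "x-ai/grok-3-mini-beta",
        "messages": [{"role": "user", "content": "Hello"}],
    },
)

message = response.json()["choices"][0]["message"]

# The final answer intended for the user.
print("Answer:", message["content"])

# The model's internal reasoning; it may be absent or null for some requests,
# so fall back to an empty string.
print("Reasoning:", message.get("reasoning_content") or "")
```
{% endcode %}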
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-4-1-fast-non-reasoning.md # grok-4.1-fast-non-reasoning {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following model: * `x-ai/grok-4-1-fast-non-reasoning` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview xAI’s latest multimodal model, offering state-of-the-art cost efficiency and a 2M-token context window. Non-reasoning variant. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to. :digit\_four: **(Optional) Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["x-ai/grok-4-1-fast-non-reasoning"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"top_a":{"type":"number","minimum":0,"maximum":1,"description":"Alternate top sampling parameter."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. 
Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. 
If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."}},"required":["model","messages"],"title":"x-ai/grok-4-1-fast-non-reasoning"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"x-ai/grok-4-1-fast-non-reasoning", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'x-ai/grok-4-1-fast-non-reasoning', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
**Response**:

{% code overflow="wrap" %}
```json5
{
  "id": "cde99f85-8ddb-6585-168f-02d58ae9e1e2",
  "object": "chat.completion",
  "created": 1763994479,
  "model": "grok-4-1-fast-non-reasoning",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Mankind? Fascinating bunch—capable of landing on the moon, splitting atoms, and inventing cat videos, yet also prone to wars, reality TV, and pineapple on pizza debates. We've got this wild mix of curiosity, creativity, and chaos that drives progress (hello, smartphones and vaccines) while occasionally tripping over our own egos. Overall, I'm optimistic: with our knack for adaptation and innovation, humanity's got a shot at solving the big stuff like climate change or AI ethics. What sparks your take on us?",
        "refusal": null
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 177,
    "completion_tokens": 106,
    "total_tokens": 283,
    "prompt_tokens_details": {
      "text_tokens": 177,
      "audio_tokens": 0,
      "image_tokens": 0,
      "cached_tokens": 161
    },
    "completion_tokens_details": {
      "reasoning_tokens": 0,
      "audio_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    },
    "num_sources_used": 0
  },
  "system_fingerprint": "fp_80e0751284",
  "meta": {
    "usage": {
      "tokens_used": 204
    }
  }
}
```
{% endcode %}
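The API schema above also lists an optional `stream` parameter, which returns the answer incrementally as server-sent events containing `chat.completion.chunk` objects. Below is a minimal streaming sketch in Python; it assumes the common OpenAI-style SSE framing (`data: `-prefixed lines terminated by `data: [DONE]`), so verify the framing against your own responses before relying on it.

{% code overflow="wrap" %}
```python
import json
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "x-ai/grok-4-1-fast-non-reasoning",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,                             # stream the answer as it is generated
        "stream_options": {"include_usage": True},  # ask for usage stats in the final chunk
    },
    stream=True,
)
response.raise_for_status()

for line in response.iter_lines():
    if not line:
        continue
    decoded = line.decode("utf-8")
    if not decoded.startswith("data: "):
        continue
    payload = decoded[len("data: "):]
    if payload == "[DONE]":  # assumed end-of-stream marker
        break
    chunk = json.loads(payload)
    choices = chunk.get("choices") or []
    if choices:
        # Each chunk's delta carries the next fragment of the assistant message.
        print(choices[0].get("delta", {}).get("content") or "", end="", flush=True)
print()
```
{% endcode %}

When `stream_options.include_usage` is enabled, the final chunk may carry only usage statistics, which is why the sketch guards against an empty `choices` list.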
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-4-1-fast-reasoning.md

# grok-4.1-fast-reasoning

This documentation is valid for the following model:

* `x-ai/grok-4-1-fast-reasoning`
## Model Overview

xAI’s multimodal model, offering state-of-the-art cost efficiency and a 2M-token context window.\
Reasoning variant.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them (a short illustrative sketch follows these instructions).

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
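As an illustration of step 4, here is a hedged sketch that sets a few of the optional parameters documented in the API schema below: `temperature`, `max_completion_tokens`, and the `reasoning.effort` setting available for this reasoning variant. The values themselves are arbitrary examples, not recommendations.

{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "x-ai/grok-4-1-fast-reasoning",
        "messages": [
            {"role": "user", "content": "Explain what a 2M-token context window means."}
        ],
        # Optional parameters from the API schema below (values are illustrative):
        "temperature": 0.2,               # more focused, less random output
        "max_completion_tokens": 512,     # upper bound on visible + reasoning tokens
        "reasoning": {"effort": "low"},   # reasoning effort: low / medium / high
    },
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}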
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["x-ai/grok-4-1-fast-reasoning"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"top_a":{"type":"number","minimum":0,"maximum":1,"description":"Alternate top sampling parameter."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. 
Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. 
If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"reasoning":{"type":"object","properties":{"effort":{"type":"string","enum":["low","medium","high"],"description":"Reasoning effort setting"},"max_tokens":{"type":"integer","minimum":1,"description":"Max tokens of reasoning content. Cannot be used simultaneously with effort."},"exclude":{"type":"boolean","description":"Whether to exclude reasoning from the response"}},"description":"Configuration for model reasoning/thinking tokens"}},"required":["model","messages"],"title":"x-ai/grok-4-1-fast-reasoning"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"x-ai/grok-4-1-fast-reasoning", "messages":[ { "role":"user", # insert your prompt here "content":"Hi! What do you think about mankind?" } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'x-ai/grok-4-1-fast-reasoning', messages:[ { role:'user', // insert your prompt here content: 'Hi! What do you think about mankind?' } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
**Response**:

{% code overflow="wrap" %}
```json5
{
  "id": "b7b4739a-39d2-1fd3-f6fc-2a97de9da190",
  "object": "chat.completion",
  "created": 1763993842,
  "model": "grok-4-1-fast-reasoning",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hi! Mankind? Fascinating bunch. You've got this wild mix of brilliance and absurdity—splitting atoms to power cities (or bombs), painting the Sistine Chapel while arguing over who gets the last slice of pizza, and launching rockets to Mars just because \"why not?\" You've invented coffee, democracy, and the internet, but also reality TV and pineapple on pizza. Capable of staggering kindness and unthinkable cruelty, yet somehow you keep muddling forward, adapting, creating, and occasionally tripping over your own shoelaces.\n\nOverall, I'm optimistic. You're the species that built *me*, after all. What's not to like? What sparked the question—good day or existential crisis? 😊",
        "refusal": null
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 165,
    "completion_tokens": 140,
    "total_tokens": 573,
    "prompt_tokens_details": {
      "text_tokens": 165,
      "audio_tokens": 0,
      "image_tokens": 0,
      "cached_tokens": 151
    },
    "completion_tokens_details": {
      "reasoning_tokens": 268,
      "audio_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    },
    "num_sources_used": 0
  },
  "system_fingerprint": "fp_fcabeb8dbc",
  "meta": {
    "usage": {
      "tokens_used": 515
    }
  }
}
```
{% endcode %}
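Since the model is multimodal, the request schema above also accepts `image_url` content parts alongside text. The sketch below shows one way to combine them; the image URL is a placeholder you would replace with your own JPG/JPEG, PNG, GIF, or WEBP image, and `detail` is just one of the documented options (`low`, `high`, `auto`).

{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "x-ai/grok-4-1-fast-reasoning",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "What is shown in this image?"},
                    {
                        "type": "image_url",
                        "image_url": {
                            # Placeholder URL — replace with your own image
                            "url": "https://example.com/image.jpg",
                            "detail": "auto",
                        },
                    },
                ],
            }
        ],
    },
)

print(json.dumps(response.json(), indent=2, ensure_ascii=False))
```
{% endcode %}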
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-4-fast-non-reasoning.md

# grok-4-fast-non-reasoning

This documentation is valid for the following model:

* `x-ai/grok-4-fast-non-reasoning`
## Model Overview

xAI’s multimodal model, offering state-of-the-art cost efficiency and a 2M-token context window.\
Non-reasoning variant.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them (a short illustrative sketch follows these instructions).

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
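To illustrate step 4 for this model, the sketch below uses the optional `tools` and `tool_choice` parameters from the API schema that follows. The `get_weather` function is a purely hypothetical tool definition; whether the model decides to call it depends on the prompt, so treat this as a sketch of the request shape rather than a guaranteed interaction.

{% code overflow="wrap" %}
```python
import requests
import json

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "x-ai/grok-4-fast-non-reasoning",
        "messages": [
            {"role": "user", "content": "What's the weather like in Paris today?"}
        ],
        # A hypothetical tool definition, shaped per the schema below:
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "description": "Get the current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "city": {"type": "string", "description": "City name"}
                        },
                        "required": ["city"],
                    },
                },
            }
        ],
        "tool_choice": "auto",  # let the model decide whether to call the tool
    },
)

data = response.json()
message = data["choices"][0]["message"]
# If the model chose to call the tool, the arguments arrive as a JSON string
# that should be validated before you execute anything with it.
for call in message.get("tool_calls") or []:
    print(call["function"]["name"], call["function"]["arguments"])
```
{% endcode %}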
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["x-ai/grok-4-fast-non-reasoning"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"top_a":{"type":"number","minimum":0,"maximum":1,"description":"Alternate top sampling parameter."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. 
Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. 
If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."}},"required":["model","messages"],"title":"x-ai/grok-4-fast-non-reasoning"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"x-ai/grok-4-fast-non-reasoning", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'x-ai/grok-4-fast-non-reasoning', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
**Response**:

{% code overflow="wrap" %}
```json5
{
  "id": "cbbc51d7-81ed-ccab-016a-c02dea45e7ec_us-east-1",
  "system_fingerprint": "fp_e7507192a3",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?",
        "refusal": null
      }
    }
  ],
  "created": 1759186645,
  "model": "grok-4-fast-non-reasoning",
  "usage": {
    "prompt_tokens": 55,
    "completion_tokens": 9,
    "total_tokens": 64,
    "prompt_tokens_details": {
      "text_tokens": 130,
      "audio_tokens": 0,
      "image_tokens": 0,
      "cached_tokens": 129
    },
    "completion_tokens_details": {
      "reasoning_tokens": 0,
      "audio_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    },
    "num_sources_used": 0
  }
}
```
{% endcode %}
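The request schema above also lists the optional `logprobs` and `top_logprobs` parameters. The snippet below is a minimal sketch (not one of the official examples) showing how they could be added to the same request; reading the per-token data assumes the `choices[0].logprobs.content` structure described in the response schema.

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "x-ai/grok-4-fast-non-reasoning",
        "messages": [{"role": "user", "content": "Hello"}],
        # Return log probabilities for each output token, with the
        # 5 most likely alternatives per position
        # (top_logprobs requires logprobs to be set to True).
        "logprobs": True,
        "top_logprobs": 5,
    },
)

data = response.json()
# Per the response schema, token-level data is under choices[0].logprobs.content.
for item in (data["choices"][0].get("logprobs") or {}).get("content", []):
    print(item["token"], item["logprob"])
```
{% endcode %}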
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-4-fast-reasoning.md

# grok-4-fast-reasoning

This documentation is valid for the following model:

* `x-ai/grok-4-fast-reasoning`
## Model Overview

xAI’s multimodal model, offering state-of-the-art cost efficiency and a 2M-token context window.\
Reasoning variant.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["x-ai/grok-4-fast-reasoning"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"top_a":{"type":"number","minimum":0,"maximum":1,"description":"Alternate top sampling parameter."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. 
Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. 
If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"reasoning":{"type":"object","properties":{"effort":{"type":"string","enum":["low","medium","high"],"description":"Reasoning effort setting"},"max_tokens":{"type":"integer","minimum":1,"description":"Max tokens of reasoning content. Cannot be used simultaneously with effort."},"exclude":{"type":"boolean","description":"Whether to exclude reasoning from the response"}},"description":"Configuration for model reasoning/thinking tokens"}},"required":["model","messages"],"title":"x-ai/grok-4-fast-reasoning"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"x-ai/grok-4-fast-reasoning", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'x-ai/grok-4-fast-reasoning', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
**Response**:

{% code overflow="wrap" %}
```json5
{
  "id": "a6994b62-1f63-fb3a-f34f-7e90fa4cac77_us-east-1",
  "system_fingerprint": "fp_9362061f30",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?",
        "refusal": null
      }
    }
  ],
  "created": 1759187139,
  "model": "grok-4-fast-reasoning",
  "usage": {
    "prompt_tokens": 50,
    "completion_tokens": 9,
    "total_tokens": 59,
    "prompt_tokens_details": {
      "text_tokens": 118,
      "audio_tokens": 0,
      "image_tokens": 0,
      "cached_tokens": 117
    },
    "completion_tokens_details": {
      "reasoning_tokens": 105,
      "audio_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    },
    "num_sources_used": 0
  }
}
```
{% endcode %}
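The request schema for this model additionally accepts a `reasoning` object with `effort` (`low`/`medium`/`high`), `max_tokens`, and `exclude` fields. Below is a minimal sketch (not an official example) of passing a reasoning effort level and reading the reasoning token count reported in `usage`:

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "x-ai/grok-4-fast-reasoning",
        "messages": [{"role": "user", "content": "Hello"}],
        # Request a higher reasoning effort; alternatively, set
        # reasoning.max_tokens instead of effort (not both at once).
        "reasoning": {"effort": "high"},
    },
)

data = response.json()
print(data["choices"][0]["message"]["content"])
# Reasoning tokens are counted as completion tokens and reported here:
print(data["usage"]["completion_tokens_details"]["reasoning_tokens"])
```
{% endcode %}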
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-4.md

# grok-4

This documentation is valid for the following model:

* `x-ai/grok-4-07-09`
## Model Overview

Grok 4 is boldly described by its developers as the most intelligent model in the world (as of July 2025).

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
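Since the request schema below also allows `image_url` content parts alongside text, here is a minimal sketch (an illustration based on that schema, not an official example; the image URL is a placeholder) of sending an image together with a question to this model:

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "x-ai/grok-4-07-09",
        "messages": [
            {
                "role": "user",
                # A user message may combine text and image_url content parts.
                "content": [
                    {"type": "text", "text": "What is shown in this image?"},
                    {
                        "type": "image_url",
                        "image_url": {
                            # A URL of the image or base64-encoded image data
                            # (placeholder URL; replace with your own image).
                            "url": "https://example.com/image.jpg",
                            "detail": "auto",
                        },
                    },
                ],
            }
        ],
    },
)

print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}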
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["x-ai/grok-4-07-09"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"top_a":{"type":"number","minimum":0,"maximum":1,"description":"Alternate top sampling parameter."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. 
Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. 
If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"reasoning":{"type":"object","properties":{"effort":{"type":"string","enum":["low","medium","high"],"description":"Reasoning effort setting"},"max_tokens":{"type":"integer","minimum":1,"description":"Max tokens of reasoning content. Cannot be used simultaneously with effort."},"exclude":{"type":"boolean","description":"Whether to exclude reasoning from the response"}},"description":"Configuration for model reasoning/thinking tokens"}},"required":["model","messages"],"title":"x-ai/grok-4-07-09"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"x-ai/grok-4-07-09", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'x-ai/grok-4-07-09', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "gen-1752837143-rG1L7RPFpBi9pJdCHTzm", "system_fingerprint": "fp_ff08cddfd3", "object": "chat.completion", "choices": [ { "index": 0, "finish_reason": "stop", "logprobs": null, "message": { "role": "assistant", "content": "Hello! I'm Grok, built by xAI to help with answers, ideas, and a bit of cosmic wit. What can I do for you today? 🚀", "reasoning_content": "Thinking... Thinking... ", "refusal": null } } ], "created": 1752837143, "model": "x-ai/grok-4", "usage": { "prompt_tokens": 53, "completion_tokens": 5689, "total_tokens": 5742, "prompt_tokens_details": { "cached_tokens": 2 }, "completion_tokens_details": { "reasoning_tokens": 138 } } } ``` {% endcode %}
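The request schema above also supports streaming responses via the optional `stream` and `stream_options` parameters, with each chunk delivered as a `chat.completion.chunk` object over server-sent events. Below is an illustrative sketch of consuming such a stream in Python; the `data:` line framing and the `[DONE]` terminator are assumptions based on the common SSE convention, since the schema itself only describes the chunk objects.

{% code overflow="wrap" %}
```python
import requests
import json

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "x-ai/grok-4-07-09",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,                             # stream chunks as server-sent events
        "stream_options": {"include_usage": True},  # also emit a final usage chunk
    },
    stream=True,
)

# Assumed SSE framing: each event line looks like `data: {...}`,
# and the stream is expected to end with `data: [DONE]`.
for raw in response.iter_lines():
    if not raw:
        continue
    line = raw.decode("utf-8")
    if not line.startswith("data: "):
        continue
    payload = line[len("data: "):]
    if payload.strip() == "[DONE]":
        break
    chunk = json.loads(payload)
    for choice in chunk.get("choices", []):
        delta = choice.get("delta") or {}
        if delta.get("content"):
            print(delta["content"], end="", flush=True)
```
{% endcode %}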
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-code-fast-1.md # grok-code-fast-1

This documentation is valid for the following model:

* x-ai/grok-code-fast-1
## Model Overview

This model provides rapid, budget-friendly reasoning for agentic coding. By showing reasoning traces in its output, it enables developers to refine and improve their workflows.

## How to Make a Call
**Step-by-Step Instructions**

1. **Setup You Can’t Skip**
   * [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).
   * [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI.
2. **Copy the code example**
   At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.
3. **Modify the code example**
   * Replace `` with your actual AI/ML API key from your account.
   * Insert your question or request into the `content` field; this is what the model will respond to.
4. **(Optional) Adjust other optional parameters if needed**
   Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters to adjust the model’s behavior; see the illustrative request right after these steps. Below, you can also find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.
5. **Run your modified code**
   Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
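As an illustration of step 4, here is a minimal sketch of the same request with a few optional parameters added. The parameter values are arbitrary examples, not recommendations; see the [API schema](#api-schema) below for the full list of parameters and their valid ranges.

{% code overflow="wrap" %}
```python
import requests
import json

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "x-ai/grok-code-fast-1",
        "messages": [
            {"role": "user", "content": "Write a Python function that reverses a linked list."}
        ],
        # Optional parameters (example values only):
        "temperature": 0.2,              # lower values make the output more deterministic
        "max_completion_tokens": 1024,   # upper bound on generated tokens, including reasoning tokens
        "reasoning": {"effort": "low"},  # reasoning effort setting from the schema below
    },
)
data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}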
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["x-ai/grok-code-fast-1"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. 
This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"top_a":{"type":"number","minimum":0,"maximum":1,"description":"Alternate top sampling parameter."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"reasoning":{"type":"object","properties":{"effort":{"type":"string","enum":["low","medium","high"],"description":"Reasoning effort setting"},"max_tokens":{"type":"integer","minimum":1,"description":"Max tokens of reasoning content. Cannot be used simultaneously with effort."},"exclude":{"type":"boolean","description":"Whether to exclude reasoning from the response"}},"description":"Configuration for model reasoning/thinking tokens"},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. 
Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"x-ai/grok-code-fast-1"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"x-ai/grok-code-fast-1", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'x-ai/grok-code-fast-1', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "1c044ed9-fcf0-4ea2-6d79-820bdcee6280_us-east-1", "system_fingerprint": "fp_10f00c862d", "object": "chat.completion", "choices": [ { "index": 0, "finish_reason": "stop", "message": { "role": "assistant", "content": "Hello! I'm Grok, built by xAI to help with questions and chats. What can I do for you today?", "refusal": null } } ], "created": 1758231743, "model": "grok-code-fast-1", "usage": { "prompt_tokens": 86, "completion_tokens": 79, "total_tokens": 165, "prompt_tokens_details": { "text_tokens": 205, "audio_tokens": 0, "image_tokens": 0, "cached_tokens": 192 }, "completion_tokens_details": { "reasoning_tokens": 214, "audio_tokens": 0, "accepted_prediction_tokens": 0, "rejected_prediction_tokens": 0 }, "num_sources_used": 0 } } ``` {% endcode %}
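If you set `stream` to `true` in the request body, the same endpoint returns the `chat.completion.chunk` events described in the `text/event-stream` schema above instead of a single JSON object. Below is a minimal sketch of consuming such a stream in Python; it assumes the endpoint follows the common OpenAI-style SSE convention of `data:`-prefixed lines terminated by a `data: [DONE]` sentinel, which is not shown explicitly in the schema.

{% code overflow="wrap" %}
```python
import json
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "x-ai/grok-code-fast-1",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,  # request server-sent events instead of a single response body
    },
    stream=True,
)
response.raise_for_status()

# Each event line looks like: data: {"choices":[{"delta":{"content":"..."}}], ...}
for line in response.iter_lines(decode_unicode=True):
    if not line or not line.startswith("data: "):
        continue
    payload = line[len("data: "):]
    if payload == "[DONE]":  # assumed end-of-stream sentinel (OpenAI-style SSE)
        break
    chunk = json.loads(payload)
    for choice in chunk.get("choices", []):
        delta = choice.get("delta") or {}
        print(delta.get("content") or "", end="", flush=True)
print()
```
{% endcode %}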
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/gryphe.md # Gryphe - [MythoMax L2 (13B)](/api-references/text-models-llm/gryphe/mythomax-l2-13b.md) --- # Source: https://docs.aimlapi.com/api-references/video-models/minimax/hailuo-02.md # hailuo-02 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `minimax/hailuo-02` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} Compared to earlier versions, this model brings enhanced physics, more natural camera movement, and better alignment with prompts. It currently supports 10-second clips at 768p, with native 1080p coming soon. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["minimax/hailuo-02"]},"prompt":{"type":"string","maxLength":2000,"description":"The text description of the scene, subject, or action to generate in the video."},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the first frame for the video.\nImage specifications: \n- format must be JPG, JPEG, or PNG; \n- aspect ratio should be greater than 2:5 and less than 5:2; \n- the shorter side must exceed 300 pixels; \n- file size must not exceed 20MB."},"last_image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image to be used as the last frame of the video."},"resolution":{"type":"string","enum":["768P","1080P"],"default":"768P","description":"The dimensions of the video display. 1080p corresponds to 1920 x 1080 pixels, 768p corresponds to 1366 x 768 pixels."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[6,10]},"enhance_prompt":{"type":"boolean","default":true,"description":"If True, the incoming prompt will be automatically optimized to improve generation quality when needed. For more precise control, set it to False — the model will then follow the instructions more strictly."}},"required":["model","prompt"],"title":"minimax/hailuo-02"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"get":{"operationId":"VideoControllerV2_pollVideo_v2","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"description":"Successfully generated video","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Video.v2.PollVideoResponseDTO"}}}}},"tags":["Video Models"]}}},"components":{"schemas":{"Video.v2.PollVideoResponseDTO":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."},"duration":{"type":"number","nullable":true,"description":"The duration of the video."}},"required":["url"]},"duration":{"type":"number","nullable":true,"description":"The duration of the video."},"error":{"nullable":true,"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"tokens_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["tokens_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% hint style="info" %} Generation may take around 4-5 minutes for a 6-second video and 8-9 minutes for a 10-second video. 
{% endhint %} {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : base_url = "https://api.aimlapi.com/v2" api_key = "" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/generate/video/minimax/generation" headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } data = { "model": "minimax/hailuo-02", "prompt": "Mona Lisa puts on glasses with her hands.", "first_frame_image": "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg" } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/generate/video/minimax/generation" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() print(gen_response) gen_id = gen_response.get("generation_id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'generation_id': '282052359471184', 'status': 'queued'} Gen_ID: 282052359471184 Status: queued Still waiting... Checking again in 10 seconds. Status: queued Still waiting... Checking again in 10 seconds. Status: queued Still waiting... Checking again in 10 seconds. Status: queued Still waiting... Checking again in 10 seconds. Status: queued Still waiting... Checking again in 10 seconds. Status: queued Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': '282052359471184', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/whale/inference_output%2Fvideo%2F2025-06-20%2Fff397144-0af8-4c32-a157-b60b1e05ed32%2Foutput.mp4?Expires=1750446300&OSSAccessKeyId=LTAI5tAmwsjSaaZVA6cEFAUu&Signature=hNtlgGPljugZ1uxDyRPXRCBS%2B1Y%3D'}} ``` {% endcode %}
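When the task reaches the `completed` status, the `video.url` field in the response points to a downloadable MP4. The signed URL in the example above includes an `Expires` parameter, so it is presumably time-limited; below is a minimal sketch of saving the file locally (the `output.mp4` name is just an example).

{% code overflow="wrap" %}
```python
import requests

def download_video(response_data, file_name="output.mp4"):
    # response_data is the final dict returned by get_video() once status == "completed"
    video_url = response_data["video"]["url"]
    with requests.get(video_url, stream=True) as video_response:
        video_response.raise_for_status()
        with open(file_name, "wb") as file:
            for chunk in video_response.iter_content(chunk_size=8192):
                file.write(chunk)
    print(f"Saved video to {file_name}")
```
{% endcode %}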
Generated Video **Original**: [768x1142](https://drive.google.com/file/d/1l4R2YH2jowZR1Brgbrg3wQYqip3bFtMu/view?usp=sharing) **Low-res GIF preview**:
--- # Source: https://docs.aimlapi.com/api-references/video-models/minimax/hailuo-2.3-fast.md # hailuo-2.3-fast {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `minimax/hailuo-2.3-fast` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} A fast version of the [hailuo-2.3](https://docs.aimlapi.com/api-references/video-models/minimax/hailuo-2.3) model. It delivers more expressive motion, produces more realistic and stable visuals, and introduces major improvements in the depiction of physical actions, stylization, and subtle character expressions, while further refining its responsiveness to motion commands. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["minimax/hailuo-2.3-fast"]},"prompt":{"type":"string","maxLength":2000,"description":"The text description of the scene, subject, or action to generate in the video."},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the first frame for the video.\nImage specifications: \n- format must be JPG, JPEG, or PNG; \n- aspect ratio should be greater than 2:5 and less than 5:2; \n- the shorter side must exceed 300 pixels; \n- file size must not exceed 20MB."},"resolution":{"type":"string","enum":["768P","1080P"],"default":"768P","description":"The dimensions of the video display. 1080p corresponds to 1920 x 1080 pixels, 768p corresponds to 1366 x 768 pixels."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[6,10]},"enhance_prompt":{"type":"boolean","default":true,"description":"If True, the incoming prompt will be automatically optimized to improve generation quality when needed. For more precise control, set it to False — the model will then follow the instructions more strictly."}},"required":["model","prompt","image_url"],"title":"minimax/hailuo-2.3-fast"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Code Example The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "minimax/hailuo-2.3-fast", "prompt": "Mona Lisa puts on glasses with her hands.", "image_url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/mona_lisa_extended.jpg", "duration": "5", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() print(gen_response) gen_id = gen_response.get("generation_id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["waiting", "queued", "generating"]: print(f"Status: {status}. Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. 
Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "minimax/hailuo-2.3-fast", prompt: "Mona Lisa puts on glasses with her hands.", image_url: "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/mona_lisa_extended.jpg", duration: "5", }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 15 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 15 * 1000; // 15 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["waiting", "queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }) } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'generation_id': '349872201343396:minimax/hailuo-2.3-fast', 'status': 'queued', 'meta': {'usage': {'credits_used': 399000}}} Generation ID: 349872201343396:minimax/hailuo-2.3-fast Status: queued. Checking again in 15 seconds. Status: queued. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: {'id': '349872201343396:minimax/hailuo-2.3-fast', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/whale/inference_output%2Fvideo%2F2025-12-29%2Faa7d5360-0ef6-4bcd-ac65-2cf6b906cb71%2Foutput.mp4?Expires=1767003365&OSSAccessKeyId=LTAI5tAmwsjSaaZVA6cEFAUu&Signature=NDBDXmVZr3QX5XOxReOH3n8pwLQ%3D'}} ``` {% endcode %}
**Processing time**: \~ 1 min 9 sec. **Generated video** (1364x768, without sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/minimax/hailuo-2.3.md # hailuo-2.3 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `minimax/hailuo-2.3` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} This model delivers more expressive motion, produces more realistic and stable visuals, and introduces major improvements in the depiction of physical actions, stylization, and subtle character expressions, while further refining its responsiveness to motion commands. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["minimax/hailuo-2.3"]},"prompt":{"type":"string","maxLength":2000,"description":"The text description of the scene, subject, or action to generate in the video."},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the first frame for the video.\nImage specifications: \n- format must be JPG, JPEG, or PNG; \n- aspect ratio should be greater than 2:5 and less than 5:2; \n- the shorter side must exceed 300 pixels; \n- file size must not exceed 20MB."},"resolution":{"type":"string","enum":["768P","1080P"],"default":"768P","description":"The dimensions of the video display. 1080p corresponds to 1920 x 1080 pixels, 768p corresponds to 1366 x 768 pixels."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[6,10]},"enhance_prompt":{"type":"boolean","default":true,"description":"If True, the incoming prompt will be automatically optimized to improve generation quality when needed. For more precise control, set it to False — the model will then follow the instructions more strictly."}},"required":["model","prompt"],"title":"minimax/hailuo-2.3"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Code Example The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "minimax/hailuo-2.3", "prompt": "Mona Lisa puts on glasses with her hands.", "image_url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/mona_lisa_extended.jpg", "duration": "5", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() print(gen_response) gen_id = gen_response.get("generation_id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["waiting", "queued", "generating"]: print(f"Status: {status}. Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. 
Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "minimax/hailuo-2.3", prompt: "Mona Lisa puts on glasses with her hands.", image_url: "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/mona_lisa_extended.jpg", duration: "5", }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 15 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 15 * 1000; // 15 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["waiting", "queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }) } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'generation_id': '349865017168125:minimax/hailuo-2.3', 'status': 'queued', 'meta': {'usage': {'credits_used': 588000}}} Generation ID: 349865017168125:minimax/hailuo-2.3 Status: queued. Checking again in 15 seconds. Status: queued. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: {'id': '349865017168125:minimax/hailuo-2.3', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/whale/inference_output%2Fvideo%2F2025-12-29%2F676511a8-3a26-4c6f-b019-915363f924f6%2Foutput.mp4?Expires=1767001698&OSSAccessKeyId=LTAI5tAmwsjSaaZVA6cEFAUu&Signature=csqz91Q%2BwI%2Fm6T4dAOwatwAGhEI%3D'}} ``` {% endcode %}
**Processing time**: \~ 1 min 38 sec. **Generated video** (1364x768, without sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/text-models-llm/nousresearch/hermes-4-405b.md # hermes-4-405b {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following model: `nousresearch/hermes-4-405b` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A hybrid reasoning model designed to be creative, engaging, and neutrally aligned, while delivering state-of-the-art math, coding, and reasoning performance among open-weight models. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to. :digit\_four: **(Optional) Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them (a short request sketch with optional parameters follows these steps). :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
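As a sketch for step 4, the request below adds a couple of the optional parameters listed in the API schema (`temperature` and `max_tokens`); the values are illustrative only, not recommended defaults.

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "nousresearch/hermes-4-405b",
        "messages": [
            {"role": "user", "content": "Explain what a hash map is in two sentences."}
        ],
        # Optional parameters from the API schema below; these values are only examples.
        "temperature": 0.7,
        "max_tokens": 256,
    },
)
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}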
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["nousresearch/hermes-4-405b"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. 
This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"top_a":{"type":"number","minimum":0,"maximum":1,"description":"Alternate top sampling parameter."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. 
The returned text will not contain the stop sequence."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"nousresearch/hermes-4-405b"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model": "nousresearch/hermes-4-405b", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'nousresearch/hermes-4-405b', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "gen-1758225008-VhzEA3LAfGuc63grTCeV", "object": "chat.completion", "choices": [ { "index": 0, "finish_reason": "stop", "logprobs": null, "message": { "role": "assistant", "content": "Greetings! I'm Hermes from Nous Research. I'm here to help you with any tasks you might have, from analysis to writing and beyond. What can I assist you with today?", "reasoning_content": null, "refusal": null } } ], "created": 1758225008, "model": "nousresearch/hermes-4-405b", "usage": { "prompt_tokens": 53, "completion_tokens": 239, "total_tokens": 292 } } ``` {% endcode %}
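The schema above also documents `tools` and `tool_choice` for function calling, while the basic code example only sends a plain prompt. The sketch below is a minimal, non-authoritative illustration of passing a tool definition to the same endpoint and reading back any `tool_calls` from the response; the `get_weather` function and its parameters are hypothetical and exist only to show the request shape.

{% code overflow="wrap" %}
```python
import requests
import json

API_KEY = ""  # Insert your AIML API Key

# A hypothetical tool definition, used here only to illustrate the request shape.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name."}
                },
                "required": ["city"],
            },
        },
    }
]

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "nousresearch/hermes-4-405b",
        "messages": [{"role": "user", "content": "What is the weather in Paris right now?"}],
        "tools": tools,
        "tool_choice": "auto",
    },
)
response.raise_for_status()
message = response.json()["choices"][0]["message"]

# If the model chose to call the tool, each entry carries the function name
# and its JSON-encoded arguments; validate them before running anything.
for call in message.get("tool_calls") or []:
    print(call["function"]["name"], json.loads(call["function"]["arguments"]))
```
{% endcode %}

As the schema itself notes, the model does not always produce valid JSON and may hallucinate parameters, so validate the returned arguments in your code before executing the call.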
--- # Source: https://docs.aimlapi.com/api-references/speech-models/text-to-speech/hume-ai.md # Hume AI - [octave-2](/api-references/speech-models/text-to-speech/hume-ai/octave-2.md) --- # Source: https://docs.aimlapi.com/api-references/image-models/tencent/hunyuan-image-v3-text-to-image.md # Hunyuan Image v3 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `hunyuan/hunyuan-image-v3-text-to-image` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview An advanced capabilities of Hunyuan Image 3.0 to generate compelling visuals that seamlessly enhance and communicate your written content. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["hunyuan/hunyuan-image-v3-text-to-image"]},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"num_images":{"type":"number","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."},"seed":{"type":"integer","minimum":1,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated image."},"image_size":{"anyOf":[{"type":"string","enum":["square_hd","square","portrait_4_3","portrait_16_9","landscape_4_3","landscape_16_9"]},{"type":"object","properties":{"width":{"type":"number"},"height":{"type":"number"}},"required":["width","height"]}],"default":"square_hd","description":"The size of the generated image."},"num_inference_steps":{"type":"integer","minimum":1,"maximum":50,"default":28,"description":"The number of inference steps to perform."},"guidance_scale":{"type":"number","minimum":1,"maximum":20,"default":7.5,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt when looking for a related image to show you."},"enable_safety_checker":{"type":"boolean","default":true,"description":"If set to True, the safety checker will be enabled."},"sync_mode":{"type":"boolean","default":false,"description":"If set to true, the function will wait for the image to be generated and uploaded before returning the response. 
This will increase the latency of the function but it allows you to get the image directly in the response without going through the CDN."},"output_format":{"type":"string","enum":["jpeg","png"],"default":"png","description":"The format of the generated image."},"enable_prompt_expansion":{"type":"boolean","description":"If set to True, prompt will be upsampled with more details."}},"required":["model","prompt"],"title":"hunyuan/hunyuan-image-v3-text-to-image"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image using a simple prompt. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "hunyuan/hunyuan-image-v3-text-to-image", "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.", "image_size": "landscape_16_9" } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'hunyuan/hunyuan-image-v3-text-to-image', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses. Realistic photo.', image_size: 'landscape_16_9' }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "data": [ { "url": "https://cdn.aimlapi.com/flamingo/files/b/0a8a7b4c/371avBpG23C3CoYzHJXB5.png" } ], "meta": { "usage": { "credits_used": 210000 } } } ``` {% endcode %}
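The generated file is returned as a URL in `data[0].url` (unless `sync_mode` is used to embed the result in the response). Below is a minimal sketch for saving it locally; the helper name and the output file name are arbitrary, and the function expects the parsed JSON response from the request above.

{% code overflow="wrap" %}
```python
import requests

def save_image(data: dict, path: str = "generated_image.png") -> None:
    """Download the first generated image from a parsed /v1/images/generations response."""
    image_url = data["data"][0]["url"]
    image_bytes = requests.get(image_url).content
    with open(path, "wb") as f:
        f.write(image_bytes)

# Usage (hypothetical): save_image(response.json())
```
{% endcode %}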
We obtained the following 1280x768 image by running this code example:

A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.


--- # Source: https://docs.aimlapi.com/api-references/3d-generating-models/tencent/hunyuan-part.md # Hunyuan Part {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `tencent/hunyuan-part` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview The model analyzes a 3D mesh and performs high-fidelity, structure-coherent shape decomposition, splitting the original mesh into multiple parts that can then be used independently in 3D editors, for example for texturing or animation. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["tencent/hunyuan-part"]},"mesh_url":{"type":"string","format":"uri","description":"URL of the 3D model file (.glb or .obj) to process for segmentation."},"point_prompt_x":{"type":"number","minimum":-1,"maximum":1,"description":"X coordinate of the point prompt for segmentation."},"point_prompt_y":{"type":"number","minimum":-1,"maximum":1,"description":"Y coordinate of the point prompt for segmentation."},"point_prompt_z":{"type":"number","minimum":-1,"maximum":1,"description":"Z coordinate of the point prompt for segmentation."},"point_num":{"type":"integer","default":100000,"description":"Number of points to sample from the mesh."},"use_normal":{"type":"boolean","default":true,"description":"Whether to use normal information for segmentation."},"noise_std":{"type":"number","description":"Standard deviation of noise to add to sampled points."},"seed":{"type":"integer","minimum":1,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."}},"required":["model","mesh_url"],"title":"tencent/hunyuan-part"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's segment a sample 3D mesh by passing its URL in the `mesh_url` parameter.
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "tencent/hunyuan-part", "mesh_url": "https://storage.googleapis.com/falserverless/model_tests/video_models/base_basic_shaded.glb", }, ) response.raise_for_status() data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'tencent/hunyuan-part', mesh_url: 'https://storage.googleapis.com/falserverless/model_tests/video_models/base_basic_shaded.glb', }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "segmented_mesh": { "url": "https://cdn.aimlapi.com/flamingo/files/b/0a8a7d7b/gG3-a4ScI4yBswdMQ8CYQ_segmented.glb", "content_type": "application/octet-stream", "file_name": "segmented.glb", "file_size": 1600920 }, "mask_1_mesh": { "url": "https://cdn.aimlapi.com/flamingo/files/b/0a8a7d7b/Csp8ZOZQsxpaOtDrvbe2G_mask_1.glb", "content_type": "application/octet-stream", "file_name": "mask_1.glb", "file_size": 1600912 }, "mask_2_mesh": { "url": "https://cdn.aimlapi.com/flamingo/files/b/0a8a7d7b/fStkD-Pq6RZrlooYcZ34__mask_2.glb", "content_type": "application/octet-stream", "file_name": "mask_2.glb", "file_size": 1600920 }, "mask_3_mesh": { "url": "https://cdn.aimlapi.com/flamingo/files/b/0a8a7d7b/YVHy9A0XgUMehoLCA7z5o_mask_3.glb", "content_type": "application/octet-stream", "file_name": "mask_3.glb", "file_size": 1600920 }, "best_mask_index": 2, "iou_scores": [ 0.49007099866867065, 0.5047933459281921, 0.4866638779640198 ], "seed": 3285486654, "requestId": "74e75e9a-7965-4348-a84d-d8663b0906dd", "meta": { "usage": { "credits_used": 84000 } } } ``` {% endcode %}
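The response contains one mesh per mask plus `best_mask_index` and per-mask `iou_scores`. Below is a minimal sketch for downloading the highest-rated mask mesh; the helper name is arbitrary, it expects the parsed JSON shown above, and it assumes `best_mask_index` follows the `mask_<N>_mesh` key numbering, as the sample response suggests.

{% code overflow="wrap" %}
```python
import requests

def save_best_mask(data: dict) -> str:
    """Download the mask mesh that best_mask_index points to and return its file name.

    Assumes best_mask_index matches the mask_<N>_mesh key numbering,
    as the sample response above suggests.
    """
    best_mask = data[f"mask_{data['best_mask_index']}_mesh"]
    mesh_bytes = requests.get(best_mask["url"]).content
    with open(best_mask["file_name"], "wb") as f:
        f.write(mesh_bytes)
    return best_mask["file_name"]

# Usage (hypothetical): print("Saved", save_best_mask(response.json()))
```
{% endcode %}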
--- # Source: https://docs.aimlapi.com/api-references/video-models/tencent/hunyuan-video-foley.md # hunyuan-video-foley {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `tencent/hunyuan-video-foley` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} By analyzing movement in the video, the model automatically generates appropriate sound cues—footsteps, impacts, and object interactions—resulting in more immersive clips without manual audio design. You can also describe the required sounds (non-speech only) in the `prompt` parameter. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a prompt. \ This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["tencent/hunyuan-video-foley"]},"video_url":{"type":"string","format":"uri","description":"A HTTPS URL pointing to a video or a data URI containing a video. This video will be used as a reference during generation."},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"negative_prompt":{"type":"string","default":"noisy, harsh","description":"The description of elements to avoid in the generated video."},"guidance_scale":{"type":"number","default":4.5,"description":"Classifier-free guidance scale. Controls prompt adherence / creativity."},"num_inference_steps":{"type":"integer","default":50,"description":"Number of inference steps for sampling. Higher values give better quality but take longer."},"seed":{"type":"integer","description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. If unspecified, a random number is chosen."}},"required":["model","video_url","prompt"],"title":"tencent/hunyuan-video-foley"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above. If the video generation task status is `complete`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "tencent/hunyuan-video-foley", "prompt": "Sounds of the forest birds, gentle breathing from a small mammal, soft paws padding along a forest path.", "negative_prompt": "Music", "video_url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/racoon-in-the-forest.mp4" } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["queued", "generating"]: print(f"Status: {status}. 
Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "tencent/hunyuan-video-foley", prompt: `Sounds of the forest birds, gentle breathing from a small mammal, soft paws padding along a forest path.`, negative_prompt: 'Music', video_url: 'https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/racoon-in-the-forest.mp4', }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 15 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 15 * 1000; // 15 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }) } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: S3NWNIFxS20cW9TnYnb8W Status: queued. Checking again in 15 seconds. Processing complete: {'id': 'S3NWNIFxS20cW9TnYnb8W', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/flamingo/files/b/0a8bf464/I5yDHjMQWi6wbGp4SQRcz_video_with_audio_1.mp4'}} ``` {% endcode %}
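Once the task status is `completed`, the payload contains the clip location in `video.url`. Below is a minimal sketch for saving it to disk; it expects the dictionary returned by `get_video()` in the example above, and the local file name is arbitrary.

{% code overflow="wrap" %}
```python
import requests

def save_video(response_data: dict, path: str = "video_with_audio.mp4") -> None:
    """Stream the completed clip from video.url in the task response to a local file."""
    video_url = response_data["video"]["url"]
    with requests.get(video_url, stream=True) as video_response:
        video_response.raise_for_status()
        with open(path, "wb") as f:
            for chunk in video_response.iter_content(chunk_size=8192):
                f.write(chunk)

# Usage (hypothetical): save_video(get_video(gen_id))
```
{% endcode %}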
**Processing time**: \~ 16.5 sec. **Generated video** (1280x720, with sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/vision-models/image-analysis.md # Image Analysis Some multimodal[^1] text models can recognize various objects, scenes, references, and artistic styles in an image. It would be somewhat inaccurate to duplicate their description here, as they are not specialized Vision models in the classical sense. You can read more about this functionality in the **Capabilities** section, specifically in the [Vision in Text Models (Image-to-Text)](https://docs.aimlapi.com/capabilities/image-to-text-vision) article. [^1]: **Multimodal models** can process and understand multiple types of input, such as text and images, rather than just one. --- # Source: https://docs.aimlapi.com/api-references/image-models.md # Image Models ## Overview Our API features the capability to generate images. We support multiple image models, including both open-source and proprietary options. You can find the [complete list](#all-available-image-models) along with API reference links at the end of the page. ## How to Generate an Image ### Select a model First, decide which model you want to use. Models can be trained for specific tasks (e.g., realistic results), offer higher resolutions, or include features like negative prompts. You can read about our supported models and their features on our [main website](https://aimlapi.com/models?integration-category=Image+Generation). ### Imagine a prompt Next, construct a prompt for the image. Depending on your needs, this prompt can include keywords that will shape the image: place, objects, quality, style, and other elements. This prompt is a crucial part of the image generation process and determines what will be displayed in the image. ### Configure metaparameters Then, configure the metaparameters for your generation: * **Steps**: The `n` parameter in the API controls the number of iterations the model will take to shape your image. Experiment with this parameter to achieve the best result for your prompt. * **Size**: The `size` parameter controls the resolution of the resulting image. All models have minimum and maximum resolutions, sometimes with different aspect ratios. Experiment with this parameter as well. ### Quick Code Example Here is an example of generating an image of a robot classroom using the `flux/schnell` image model: {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests url = "https://api.aimlapi.com/v1/images/generations/" payload = { "model": "flux/schnell", "prompt": """ Create a classroom of young robots. The chalkboard in the classroom has 'AI Is Your Friend' written on it. """ } headers = { # Put your AIML API Key instead of : "Authorization": "Bearer ", "content-type": "application/json" } response = requests.post(url, json=payload, headers=headers) response.raise_for_status() print("Generation:", response.json()) ``` {% endcode %} {% endtab %} {% endtabs %} We obtained the following image by running that code example:

(And AI needs your clothes, your boots and your motorcycle.)

## All Available Image Models
| Model ID + API Reference link | Developer | Context | Model Card |
| --- | --- | --- | --- |
| alibaba/qwen-image | Alibaba Cloud | | Qwen Image |
| alibaba/qwen-image-edit | Alibaba Cloud | | Qwen Image Edit |
| alibaba/z-image-turbo | Alibaba Cloud | | Z-Image Turbo |
| alibaba/z-image-turbo-lora | Alibaba Cloud | | Z-Image Turbo LoRA |
| alibaba/wan2.2-t2i-plus | Alibaba Cloud | | Wan 2.2 Plus |
| alibaba/wan2.2-t2i-flash | Alibaba Cloud | | Wan 2.2 Flash |
| alibaba/wan2.5-t2i-preview | Alibaba Cloud | | Wan 2.5 Preview |
| alibaba/wan-2-6-image | Alibaba Cloud | | Wan 2.6 |
| bytedance/seedream-3.0 | ByteDance | | Seedream 3.0 |
| bytedance/seedream-v4-text-to-image | ByteDance | | Seedream 4 Text-to-Image |
| bytedance/seedream-v4-edit | ByteDance | | Seedream 4 Edit |
| bytedance/uso | ByteDance | | USO |
| bytedance/seedream-4-5 | ByteDance | | Seedream 4.5 |
| flux-pro | Flux | | FLUX.1 [pro] |
| flux-pro/v1.1 | Flux | | FLUX 1.1 [pro] |
| flux-pro/v1.1-ultra | Flux | | FLUX 1.1 [pro ultra] |
| flux-realism | Flux | | FLUX Realism LoRA |
| flux/dev | Flux | | FLUX.1 [dev] |
| flux/dev/image-to-image | Flux | | - |
| flux/schnell | Flux | | FLUX.1 [schnell] |
| flux/kontext-max/text-to-image | Flux | | FLUX.1 Kontext [max] |
| flux/kontext-max/image-to-image | Flux | | FLUX.1 Kontext [max] |
| flux/kontext-pro/text-to-image | Flux | | Flux.1 Kontext [pro] |
| flux/kontext-pro/image-to-image | Flux | | Flux.1 Kontext [pro] |
| flux/srpo | Flux | | FLUX.1 SRPO Text-to-Image |
| flux/srpo/image-to-image | Flux | | FLUX.1 SRPO Image-to-Image |
| blackforestlabs/flux-2 | Flux | | FLUX.2 |
| blackforestlabs/flux-2-edit | Flux | | FLUX.2 Edit |
| blackforestlabs/flux-2-lora | Flux | | Flux 2 LoRA |
| blackforestlabs/flux-2-lora-edit | Flux | | Flux 2 LoRA Edit |
| blackforestlabs/flux-2-pro | Flux | | FLUX.2 [pro] |
| blackforestlabs/flux-2-pro-edit | Flux | | FLUX.2 [pro] Edit |
| imagen-3.0-generate-002 | Google | | Imagen 3 |
| google/imagen4/preview | Google | | Imagen 4 Preview |
| imagen-4.0-ultra-generate-preview-06-06 | Google | | Imagen 4 Ultra |
| google/gemini-2.5-flash-image | Google | | Gemini 2.5 Flash Image |
| google/gemini-2.5-flash-image-edit | Google | | Gemini 2.5 Flash Image Edit |
| google/gemini-3-pro-image-preview | Google | | Gemini 3 Pro Image (Nano Banana Pro) |
| google/gemini-3-pro-image-preview-edit | Google | | Gemini 3 Pro Image Edit (Nano Banana Pro) |
| google/imagen-4.0-generate-001 | Google | | Imagen 4.0 Generate |
| google/imagen-4.0-fast-generate-001 | Google | | Imagen 4.0 Fast Generate |
| google/imagen-4.0-ultra-generate-001 | Google | | Imagen 4.0 Ultra Generate |
| klingai/image-o1 | Kling AI | | Kling Image O1 |
| dall-e-2 | OpenAI | | OpenAI DALL·E 2 |
| dall-e-3 | OpenAI | | OpenAI DALL·E 3 |
| openai/gpt-image-1 | | | gpt-image-1 |
| openai/gpt-image-1-mini | OpenAI | | GPT Image 1 Mini |
| openai/gpt-image-1-5 | OpenAI | | GPT Image 1.5 |
| recraft-v3 | Recraft AI | | Recraft v3 |
| reve/create-image | Reve | | Reve Create Image |
| reve/edit-image | Reve | | Reve Edit Image |
| reve/remix-edit-image | Reve | | Reve Remix Image |
| stable-diffusion-v3-medium | Stability AI | | Stable Diffusion 3 |
| stable-diffusion-v35-large | Stability AI | | Stable Diffusion 3.5 Large |
| hunyuan/hunyuan-image-v3-text-to-image | Tencent | | HunyuanImage 3.0 |
| topaz-labs/sharpen | Topaz Labs | | Sharpen |
| topaz-labs/sharpen-gen | Topaz Labs | | Sharpen Generative |
| x-ai/grok-2-image | xAI | | Grok 2 Image |
--- # Source: https://docs.aimlapi.com/api-references/image-models/kling-ai/image-o1.md # image-o1 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `klingai/image-o1` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A multimodal image generation model supporting up to 10 reference images for visual consistency, detailed element editing, style control, and series generation. Ideal for character IPs, comic artwork, and branded content. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["klingai/image-o1"]},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"image_urls":{"type":"array","items":{"type":"string","format":"uri"},"minItems":1,"maxItems":10,"description":"List of URLs or local Base64 encoded images to edit."},"aspect_ratio":{"type":"string","enum":["21:9","16:9","4:3","3:2","1:1","2:3","3:4","9:16"],"default":"16:9","description":"The aspect ratio of the generated image."},"resolution":{"type":"string","enum":["1K","2K"],"default":"1K","description":"The resolution of the output image."},"output_format":{"type":"string","enum":["jpeg","png","webp"],"default":"png","description":"The format of the generated image."},"num_images":{"type":"number","minimum":1,"maximum":9,"default":1,"description":"The number of images to generate."}},"required":["model","prompt","image_urls"],"title":"klingai/image-o1"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image using two input images and a prompt that defines how they should be edited. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "klingai/image-o1", "prompt": "Combine the images so the T-Rex is wearing a business suit, sitting in a cozy small café, drinking from the mug. 
Blur the background slightly to create a bokeh effect.", "image_urls": [ "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png", "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/blue-mug.jpg" ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'klingai/image-o1', prompt: 'Combine the images so the T-Rex is wearing a business suit, sitting in a cozy small café, drinking from the mug. Blur the background slightly to create a bokeh effect.', image_urls: [ "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png", "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/blue-mug.jpg" ] }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "data": [ { "url": "https://cdn.aimlapi.com/flamingo/files/b/0a86f297/-XYQFo9KwWdoIiU8LywM4_21acef8f06694a78a0bc1f443875cfa3.png" } ], "meta": { "usage": { "credits_used": 58800 } } } ``` {% endcode %}
| Reference Images | Generated Image |
| --- | --- |
| Image #1 | "Combine the images so the T-Rex is wearing a business suit, sitting in a cozy small café, drinking from the mug. Blur the background slightly to create a bokeh effect." |
| Image #2 | |
--- # Source: https://docs.aimlapi.com/capabilities/image-to-text-vision.md # Vision in Text Models (Image-To-Text) This article describes a specific capability of text models: vision, which enables image-to-text conversion. A list of models that support it is provided at the end of this page. ## Example {% code overflow="wrap" %} ```python import requests import json url = "https://api.aimlapi.com/chat/completions" payload = json.dumps({ "model": "gpt-4o", "messages": [ { "role": "user", "content": [ {"type": "text", "text": "What’s in this image?"}, { "type": "image_url", "image_url": { "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg" } } ] } ], "max_tokens": 300 }) headers = { 'Content-Type': 'application/json', 'Authorization': 'Bearer ' } response = requests.post(url, headers=headers, data=payload) print(response.json()) ``` {% endcode %} ## Text Models That Support Vision * [alibaba/qwen3-vl-32b-instruct](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-vl-32b-instruct) * [alibaba/qwen3-vl-32b-thinking](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-vl-32b-thinking) * [claude-3-haiku-20240307](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-3-haiku) * [claude-3-opus-20240229](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-3-opus) * [claude-3-5-haiku-20241022](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-3.5-haiku) * [claude-3-7-sonnet-20250219](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-3.7-sonnet) * [claude-sonnet-4-20250514](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4-sonnet) * [claude-opus-4-20250514](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4-opus) * [anthropic/claude-opus-4.1](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-opus-4.1) * [anthropic/claude-sonnet-4.5](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4-5-sonnet) * [anthropic/claude-opus-4-5](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4.5-opus) * [baidu/ernie-4.5-vl-28b-a3b](https://docs.aimlapi.com/api-references/text-models-llm/baidu/ernie-4.5-vl-28b-a3b) * [baidu/ernie-4.5-vl-424b-a47b](https://docs.aimlapi.com/api-references/text-models-llm/baidu/ernie-4.5-vl-424b-a47b) * [baidu/ernie-4-5-turbo-vl-32k](https://docs.aimlapi.com/api-references/text-models-llm/baidu/ernie-4.5-turbo-vl-32k) * [gemini-2.0-flash-exp](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.0-flash-exp) * [google/gemini-2.0-flash](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.0-flash) * [google/gemini-2.5-flash](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.5-flash) * [google/gemini-2.5-pro](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.5-pro) * [google/gemma-3-4b-it](https://docs.aimlapi.com/api-references/text-models-llm/google/gemma-3) * [google/gemma-3-27b-it](https://docs.aimlapi.com/api-references/text-models-llm/google/gemma-3) * [meta-llama/Llama-Guard-3-11B-Vision-Turbo](https://docs.aimlapi.com/api-references/moderation-safety-models/meta/llama-guard-3-11b-vision-turbo) * [meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo](https://docs.aimlapi.com/api-references/text-models-llm/meta/meta-llama-3.1-405b-instruct-turbo) * 
[meta-llama/llama-4-scout](https://docs.aimlapi.com/api-references/text-models-llm/meta/llama-4-maverick) * [meta-llama/llama-4-maverick](https://docs.aimlapi.com/api-references/text-models-llm/meta/llama-4-maverick) * [MiniMax-Text-01](https://docs.aimlapi.com/api-references/text-models-llm/minimax/text-01) * [chatgpt-4o-latest](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) * [gpt-4-turbo](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4-turbo) * [gpt-4-turbo-2024-04-09](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4-turbo) * [gpt-4o](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) * [gpt-4o-2024-05-13](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) * [gpt-4o-2024-08-06](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) * [gpt-4o-mini](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o-mini) * [gpt-4o-mini-2024-07-18](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o-mini) * [openai/gpt-4.1-2025-04-14](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4.1) * [openai/gpt-4.1-mini-2025-04-14](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4.1-mini) * [openai/gpt-4.1-nano-2025-04-14](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4.1-nano) * [openai/o4-mini-2025-04-16](https://docs.aimlapi.com/api-references/text-models-llm/openai/o4-mini) * [openai/o3-2025-04-16](https://docs.aimlapi.com/api-references/text-models-llm/openai/o3) * [o1](https://docs.aimlapi.com/api-references/text-models-llm/openai/o1) * [openai/gpt-5-2025-08-07](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5) * [openai/gpt-5-mini-2025-08-07](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-mini) * [openai/gpt-5-nano-2025-08-0](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-nano) * [openai/gpt-5-chat-latest](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-chat) * ​[openai/gpt-5-1​](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-1) * [​openai/gpt-5-1-chat-latest​](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-1-chat-latest) * [​openai/gpt-5-1-codex​](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-1-codex) * [​openai/gpt-5-1-codex-mini](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-1-codex-mini) * [openai/gpt-5-2](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5.2) * [openai/gpt-5-2-chat-latest](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5.2-chat-latest) * [openai/gpt-5-2-codex](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5.2-codex) * [perplexity/sonar](https://docs.aimlapi.com/api-references/text-models-llm/perplexity/sonar) * [perplexity/sonar-pro](https://docs.aimlapi.com/api-references/text-models-llm/perplexity/sonar-pro) * [x-ai/grok-4-fast-non-reasoning](https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-4-fast-non-reasoning) * [x-ai/grok-4-fast-reasoning](https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-4-fast-reasoning) * [x-ai/grok-4-1-fast-non-reasoning](https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-4-1-fast-non-reasoning) * [x-ai/grok-4-1-fast-reasoning](https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-4-1-fast-reasoning) --- # Source: 
https://docs.aimlapi.com/api-references/video-models/magic/image-to-video.md # magic/image-to-video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `magic/image-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} The model allows you to embed your custom image into the selected video template — sound included.
Supported Templates:

* Art Gallery
* Cappadocia Balloons
* Desktop Reveal
* Digital Float
* Dubai Museum
* Egypt Pyramid
* Las Vegas LED
* New York Times Square (66)
* New York Times Square (77)
* Paris Eiffel Tower
* Phone App
* Phone Social
* Rotating Cards
* San Francisco Skyscrapers
* Stockholm Metro
* Thailand Street
* Times Square Billboard
* Times Square Round Screen
* Tokyo Billboard

*** ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schemas Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find two corresponding API schemas and an example with both endpoint calls. ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["magic/image-to-video"]},"image_url":{"type":"string","format":"uri","description":"An image (supplied via URL or Base64) that will be inserted into the selected video template as the embedded ad content."},"template":{"type":"string","enum":["Thailand Street","Times Square Billboard","New York Times Square (77)","Phone Social","Art Gallery","New York Times Square (66)","Dubai Museum","Digital Float","Rotating Cards","Desktop Reveal","Egypt Pyramid","Frames Drop","Cappadocia Balloons","Times Square Round Screen","Stockholm Metro","Tokyo Billboard","San Francisco Skyscrapers","Malaysia Shop","Las Vegas LED","Phone App","Paris Eiffel Tower"],"default":"Thailand Street","description":"Video design template."}},"required":["model","image_url"],"title":"magic/image-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Code Example The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Replace with your actual AI/ML API key api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "magic/image-to-video", "image_url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/aimlapi.jpg", "template": "New York Times Square (66)" } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() # print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } # Insert your AIML API Key instead of : headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) # print("Generation:", response.json()) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() print(gen_response) gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["waiting", "queued", "generating"]: print(f"Status: {status}. 
Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "magic/image-to-video", image_url: "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/aimlapi.jpg", template: "New York Times Square (66)" }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 15 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 15 * 1000; // 15 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["waiting", "queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }) } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': 'f_X_2Wxezmp5xNVjO1S92', 'status': 'queued', 'meta': {'usage': {'credits_used': 1050000}}} Generation ID: f_X_2Wxezmp5xNVjO1S92 Status: queued. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: {'id': 'f_X_2Wxezmp5xNVjO1S92', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/mule/ompr/openmagic/render_tasks/255013/559bbaed84a2455d8b98bac880610f1b.mp4?response-content-disposition=attachment%3B%20filename%3D559bbaed84a2455d8b98bac880610f1b.mp4&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=FUQDW4Z92RG9JPURIVP1%2F20251231%2Ffsn1%2Fs3%2Faws4_request&X-Amz-Date=20251231T164415Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=559f1a31d377647eb6a7dee2cbbf870de05062ac167547345d2c3dff4e62eef0'}} ``` {% endcode %}
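The `video.url` in the completed response is a signed link that expires quickly (the sample response above shows `X-Amz-Expires=600`, i.e. ten minutes), so you will usually want to save the file right away. A minimal sketch of a download step, assuming `response_data` is the completed result returned by `main()` in the Python example above (the helper name `download_video` is just for illustration):

{% code overflow="wrap" %}
```python
import requests

def download_video(response_data, file_name="output.mp4"):
    # URL of the finished video, as in the completed response above
    video_url = response_data["video"]["url"]
    with requests.get(video_url, stream=True) as video_response:
        video_response.raise_for_status()
        with open(file_name, "wb") as file:
            for chunk in video_response.iter_content(chunk_size=8192):
                file.write(chunk)
    print(f"Saved video to {file_name}")
```
{% endcode %}

For example, after the polling loop finishes you could call `download_video(result)` with the returned dictionary (guarding against `None` if the task timed out).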
**Processing time**: \~ 3 min 8 sec. **Generated video** (608x1080, with sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/image-models/google/imagen-3.0.md # Imagen 3 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `imagen-3.0-generate-002` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview Google's latest text-to-image AI model, designed to generate high-quality, photorealistic images from text descriptions with improved detail, lighting, and fewer artifacts. It boasts enhanced natural language understanding and better text rendering. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["imagen-3.0-generate-002"]},"prompt":{"type":"string","maxLength":400,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"convert_base64_to_url":{"type":"boolean","default":true,"description":"If True, the URL to the image will be returned; otherwise, the file will be provided in base64 format."},"num_images":{"type":"integer","maximum":4,"default":1,"description":"The number of images to generate."},"seed":{"type":"integer","minimum":0,"maximum":4294967295,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"enhance_prompt":{"type":"boolean","default":true,"description":"Optional parameter to use an LLM-based prompt rewriting feature for higher-quality images that better match the original prompt. 
Disabling it may affect image quality and prompt alignment."},"aspect_ratio":{"type":"string","enum":["1:1","9:16","16:9","3:4","4:3"],"default":"1:1","description":"The aspect ratio of the generated image."},"person_generation":{"type":"string","enum":["dont_allow","allow_adult"],"default":"allow_adult","description":"Allow generation of people."},"safety_setting":{"type":"string","enum":["block_low_and_above","block_medium_and_above","block_only_high"],"default":"block_medium_and_above","description":"Adds a filter level to safety filtering."},"add_watermark":{"type":"boolean","default":false,"description":"Add an invisible watermark to the generated images."}},"required":["model","prompt"],"title":"imagen-3.0-generate-002"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified size using a simple prompt. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "prompt": "Racoon eating ice-cream", "model": "imagen-3.0-generate-002", "convert_base64_to_url": True, } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'imagen-3.0-generate-002', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.', convert_base64_to_url: true }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %} Note that prompt enhancement is *enabled* by default. The model will also return the enhanced prompt in the response. If you prefer not to use this feature, set the parameter `enhance_prompt` to `False`.
Response {% code overflow="wrap" %} ```json5 { data: [ { mime_type: 'image/png', url: 'https://cdn.aimlapi.com/generations/guepard/1756970940506-11b77754-ca2a-4995-a260-d75adfb9885c.png', prompt: 'A playful raccoon with a mischievous grin is indulging in a scoop of creamy, strawberry ice cream. Its black mask and fluffy tail are prominent features as it delicately licks the cool treat with its pink tongue. The raccoon is perched on a park bench, the soft daylight illuminating its fur and the vibrant color of the ice cream. The background is a slightly blurred, idyllic summer scene with a few scattered trees and a lush green lawn. The overall image captures a moment of unexpected delight, with the raccoon enjoying a sweet summer treat in a natural and relaxing setting. This picture captures the charming side of this often misunderstood animal, showcasing its playful curiosity and enjoyment of simple pleasures. The image has a soft, nostalgic quality, using natural light and a shallow depth of field to focus on the raccoon and its ice-cream.' } ] } ``` {% endcode %}
The default aspect ratio is 1:1, so we obtained the following 1024x1024 image by running this code example:

In reality, raccoons shouldn’t be given ice cream or chocolate—it’s harmful to their metabolism.
But in the AI world, raccoons party like there’s no tomorrow.
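As noted above, prompt enhancement is enabled by default. If you want the model to use your prompt verbatim, and you want runs to be reproducible, you can disable it and pin a `seed` (both parameters are listed in the schema above). A minimal sketch; the seed value here is arbitrary:

{% code overflow="wrap" %}
```python
import requests
import json

def main():
    response = requests.post(
        "https://api.aimlapi.com/v1/images/generations",
        headers={
            "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",  # insert your AIML API key
            "Content-Type": "application/json",
        },
        json={
            "prompt": "Racoon eating ice-cream",
            "model": "imagen-3.0-generate-002",
            "enhance_prompt": False,  # use the prompt as written, without LLM rewriting
            "seed": 42,               # same seed + prompt + model version => same image
        },
    )
    data = response.json()
    print(json.dumps(data, indent=2, ensure_ascii=False))

if __name__ == "__main__":
    main()
```
{% endcode %}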

--- # Source: https://docs.aimlapi.com/api-references/image-models/google/imagen-4-fast-generate.md # Imagen 4 Fast Generate {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `google/imagen-4.0-fast-generate-001` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview This model is optimized for speed, offering faster image generation compared to other Imagen 4 variants like [Imagen 4 Generate 001](https://docs.aimlapi.com/api-references/image-models/google/imagen-4-generate) (standard) and [Imagen 4 Ultra Generate 001](https://docs.aimlapi.com/api-references/image-models/google/imagen-4-ultra-generate) (higher quality, slower). ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/imagen-4.0-fast-generate-001"]},"prompt":{"type":"string","maxLength":400,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"convert_base64_to_url":{"type":"boolean","default":true,"description":"If True, the URL to the image will be returned; otherwise, the file will be provided in base64 format."},"num_images":{"type":"integer","maximum":4,"default":1,"description":"The number of images to generate."},"seed":{"type":"integer","minimum":0,"maximum":4294967295,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"enhance_prompt":{"type":"boolean","default":true,"description":"Optional parameter to use an LLM-based prompt rewriting feature for higher-quality images that better match the original prompt. 
Disabling it may affect image quality and prompt alignment."},"aspect_ratio":{"type":"string","enum":["1:1","9:16","16:9","3:4","4:3"],"default":"1:1","description":"The aspect ratio of the generated image."},"person_generation":{"type":"string","enum":["dont_allow","allow_adult"],"default":"allow_adult","description":"Allow generation of people."},"safety_setting":{"type":"string","enum":["block_low_and_above","block_medium_and_above","block_only_high"],"default":"block_medium_and_above","description":"Adds a filter level to safety filtering."},"add_watermark":{"type":"boolean","default":false,"description":"Add an invisible watermark to the generated images."}},"required":["model","prompt"],"title":"google/imagen-4.0-fast-generate-001"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified aspect ratio using a simple prompt. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "prompt": "Racoon eating ice-cream", "model": "google/imagen-4.0-fast-generate-001", "aspect_ratio": "16:9" } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'google/imagen-4.0-fast-generate-001', prompt: 'Racoon eating ice-cream', aspect_ratio: '16:9' }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "data": [ { "mime_type": "image/png", "url": "https://cdn.aimlapi.com/generations/guepard/1758236595733-514db8bc-7cba-4d7b-8d6b-237c20375995.png", "prompt": "A raccoon, with a mischievous grin, holds a melting cone of mint chocolate chip ice cream in its front paws, enjoying a warm summer day in a picturesque park. The sunlight creates a gentle, golden glow around the raccoon, illuminating the soft, fluffy fur. The cone is dripping with ice cream, creating a scene of playful chaos. A detailed, high-quality photo with a shallow depth of field, blurring the background foliage, creating a soft and dreamy aesthetic. The vibrant green trees and lush grass provide a beautiful and tranquil setting for the raccoon's treat." } ], "meta": { "usage": { "tokens_used": 42000 } } } ``` {% endcode %}
So we obtained the following 1408x768 image by running this code example:

In reality, raccoons shouldn’t be given ice cream or chocolate—it’s harmful to their metabolism.
But in the AI world, raccoons party like there’s no tomorrow.
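Because this variant is tuned for speed, it is convenient for drafting several candidates in one call. According to the schema above, `num_images` accepts up to 4. A minimal sketch that requests two images and prints their URLs:

{% code overflow="wrap" %}
```python
import requests

def main():
    response = requests.post(
        "https://api.aimlapi.com/v1/images/generations",
        headers={
            "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",  # insert your AIML API key
            "Content-Type": "application/json",
        },
        json={
            "prompt": "Racoon eating ice-cream",
            "model": "google/imagen-4.0-fast-generate-001",
            "aspect_ratio": "16:9",
            "num_images": 2,  # up to 4 per the schema above
        },
    )
    response.raise_for_status()
    data = response.json()
    # Each entry in "data" describes one generated image
    for i, image in enumerate(data.get("data", []), start=1):
        print(f"Image {i}: {image.get('url')}")

if __name__ == "__main__":
    main()
```
{% endcode %}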

--- # Source: https://docs.aimlapi.com/api-references/image-models/google/imagen-4-generate.md # Imagen 4 Generate {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `google/imagen-4.0-generate-001` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview An advanced text-to-image model delivering a balance of speed and image quality. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/imagen-4.0-generate-001"]},"prompt":{"type":"string","maxLength":400,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"convert_base64_to_url":{"type":"boolean","default":true,"description":"If True, the URL to the image will be returned; otherwise, the file will be provided in base64 format."},"num_images":{"type":"integer","maximum":4,"default":1,"description":"The number of images to generate."},"seed":{"type":"integer","minimum":0,"maximum":4294967295,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"enhance_prompt":{"type":"boolean","default":true,"description":"Optional parameter to use an LLM-based prompt rewriting feature for higher-quality images that better match the original prompt. Disabling it may affect image quality and prompt alignment."},"aspect_ratio":{"type":"string","enum":["1:1","9:16","16:9","3:4","4:3"],"default":"1:1","description":"The aspect ratio of the generated image."},"person_generation":{"type":"string","enum":["dont_allow","allow_adult"],"default":"allow_adult","description":"Allow generation of people."},"safety_setting":{"type":"string","enum":["block_low_and_above","block_medium_and_above","block_only_high"],"default":"block_medium_and_above","description":"Adds a filter level to safety filtering."},"add_watermark":{"type":"boolean","default":false,"description":"Add an invisible watermark to the generated images."}},"required":["model","prompt"],"title":"google/imagen-4.0-generate-001"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified aspect ratio using a simple prompt. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "prompt": "Racoon eating ice-cream", "model": "google/imagen-4.0-generate-001", "aspect_ratio": "16:9" } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'google/imagen-4.0-generate-001', prompt: 'Racoon eating ice-cream', aspect_ratio: '16:9' }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "data": [ { "mime_type": "image/png", "url": "https://cdn.aimlapi.com/generations/guepard/1758236160134-5fcac25f-0c87-4145-b24b-98ccebab5c0c.png", "prompt": "A mischievous racoon, with beady eyes and a striped tail, is caught mid-lick, enjoying a stolen ice cream cone. Its small paws cradle the melting treat, and its face is smeared with the creamy sweetness, indicating a thorough and enthusiastic indulgence. The scene is set in a cluttered alleyway, with discarded boxes and old bricks forming a backdrop to the racoon's illicit feast." } ], "meta": { "usage": { "tokens_used": 84000 } } } ``` {% endcode %}
So we obtained the following 1408x768 image by running this code example:

In reality, raccoons shouldn’t be given ice cream or chocolate—it’s harmful to their metabolism.
But in the AI world, raccoons party like there’s no tomorrow.
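If you prefer to receive the image bytes directly instead of a CDN URL, the schema above exposes `convert_base64_to_url`. The sketch below sets it to `False` and decodes the returned `b64_json` field to a local PNG; it assumes `b64_json` holds the base64-encoded image bytes, as in OpenAI-style image responses:

{% code overflow="wrap" %}
```python
import base64
import requests

def main():
    response = requests.post(
        "https://api.aimlapi.com/v1/images/generations",
        headers={
            "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",  # insert your AIML API key
            "Content-Type": "application/json",
        },
        json={
            "prompt": "Racoon eating ice-cream",
            "model": "google/imagen-4.0-generate-001",
            "aspect_ratio": "16:9",
            "convert_base64_to_url": False,  # return base64 instead of a URL
        },
    )
    response.raise_for_status()
    data = response.json()
    b64_image = data["data"][0]["b64_json"]
    with open("raccoon.png", "wb") as file:
        file.write(base64.b64decode(b64_image))
    print("Saved raccoon.png")

if __name__ == "__main__":
    main()
```
{% endcode %}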

--- # Source: https://docs.aimlapi.com/api-references/image-models/google/imagen-4-preview.md # Imagen 4 Preview {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `google/imagen4/preview` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview Google’s highest quality image generation model as of May 2025. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/imagen4/preview"]},"prompt":{"type":"string","maxLength":400,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"convert_base64_to_url":{"type":"boolean","default":true,"description":"If True, the URL to the image will be returned; otherwise, the file will be provided in base64 format."},"num_images":{"type":"integer","maximum":4,"default":1,"description":"The number of images to generate."},"seed":{"type":"integer","minimum":0,"maximum":4294967295,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"enhance_prompt":{"type":"boolean","default":true,"description":"Optional parameter to use an LLM-based prompt rewriting feature for higher-quality images that better match the original prompt. Disabling it may affect image quality and prompt alignment."},"aspect_ratio":{"type":"string","enum":["1:1","9:16","16:9","3:4","4:3"],"default":"1:1","description":"The aspect ratio of the generated image."},"person_generation":{"type":"string","enum":["dont_allow","allow_adult"],"default":"allow_adult","description":"Allow generation of people."},"safety_setting":{"type":"string","enum":["block_low_and_above","block_medium_and_above","block_only_high"],"default":"block_medium_and_above","description":"Adds a filter level to safety filtering."},"add_watermark":{"type":"boolean","default":false,"description":"Add an invisible watermark to the generated images."}},"required":["model","prompt"],"title":"google/imagen4/preview"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified aspect ratio using a simple prompt. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "prompt": "Racoon eating ice-cream", "model": "google/imagen4/preview", "aspect_ratio": "16:9" } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'google/imagen4/preview', prompt: 'Racoon eating ice-cream', aspect_ratio: '16:9' }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { images: [ { url: 'https://cdn.aimlapi.com/eagle/files/panda/tI_UTxAzqLqWZZqSoNqsO_output.png', content_type: 'image/png', file_name: 'output.png', file_size: 1665805 } ], seed: 3360388064 } ``` {% endcode %}
So we obtained the following 1408x768 image by running this code example:

In reality, raccoons shouldn’t be given ice cream or chocolate—it’s harmful to their metabolism.
But in the AI world, raccoons party like there’s no tomorrow.
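For this model, the example response above returns the image under `images[0].url` along with a file name. A minimal sketch that generates an image and downloads it to disk, assuming that response shape:

{% code overflow="wrap" %}
```python
import requests

def main():
    response = requests.post(
        "https://api.aimlapi.com/v1/images/generations",
        headers={
            "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",  # insert your AIML API key
            "Content-Type": "application/json",
        },
        json={
            "prompt": "Racoon eating ice-cream",
            "model": "google/imagen4/preview",
            "aspect_ratio": "16:9",
        },
    )
    response.raise_for_status()
    data = response.json()
    image = data["images"][0]  # response shape as in the example above
    image_bytes = requests.get(image["url"]).content
    with open(image.get("file_name", "output.png"), "wb") as file:
        file.write(image_bytes)

if __name__ == "__main__":
    main()
```
{% endcode %}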

--- # Source: https://docs.aimlapi.com/api-references/image-models/google/imagen-4-ultra-generate.md # Imagen 4 Ultra Generate {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `google/imagen-4.0-ultra-generate-001` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A model built for photorealistic image generation and precise text rendering, suited for high-fidelity professional use. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/imagen-4.0-ultra-generate-001"]},"prompt":{"type":"string","maxLength":400,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"convert_base64_to_url":{"type":"boolean","default":true,"description":"If True, the URL to the image will be returned; otherwise, the file will be provided in base64 format."},"num_images":{"type":"integer","maximum":4,"default":1,"description":"The number of images to generate."},"seed":{"type":"integer","minimum":0,"maximum":4294967295,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"enhance_prompt":{"type":"boolean","default":true,"description":"Optional parameter to use an LLM-based prompt rewriting feature for higher-quality images that better match the original prompt. 
Disabling it may affect image quality and prompt alignment."},"aspect_ratio":{"type":"string","enum":["1:1","9:16","16:9","3:4","4:3"],"default":"1:1","description":"The aspect ratio of the generated image."},"person_generation":{"type":"string","enum":["dont_allow","allow_adult"],"default":"allow_adult","description":"Allow generation of people."},"safety_setting":{"type":"string","enum":["block_low_and_above","block_medium_and_above","block_only_high"],"default":"block_medium_and_above","description":"Adds a filter level to safety filtering."},"add_watermark":{"type":"boolean","default":false,"description":"Add an invisible watermark to the generated images."}},"required":["model","prompt"],"title":"google/imagen-4.0-ultra-generate-001"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified aspect ratio using a simple prompt. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "prompt": "Racoon eating ice-cream", "model": "google/imagen-4.0-ultra-generate-001", "aspect_ratio": "16:9" } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'google/imagen-4.0-ultra-generate-001', prompt: 'Racoon eating ice-cream', aspect_ratio: '16:9' }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "data": [ { "mime_type": "image/png", "url": "https://cdn.aimlapi.com/generations/guepard/1758237214798-b22381b0-cc4c-466c-92f4-c3b2531d7ebf.png", "prompt": "A curious raccoon sitting upright on a park bench, intently focused on licking a melting scoop of vanilla ice cream in a waffle cone. The raccoon has its small paws wrapped around the cone, and a tiny bit of ice cream is smeared on its nose and whiskers. The fur is ruffled and slightly damp from the treat. The park setting is sunny with dappled light filtering through the leaves of a large oak tree in the background. Autumn leaves are scattered on the ground near the bench. The ice cream is dripping slightly down the cone, and a small puddle is forming on the wooden bench. The image is captured at eye level with the raccoon." } ], "meta": { "usage": { "tokens_used": 126000 } } } ``` {% endcode %}
So we obtained the following 1408x768 image by running this code example:

In reality, raccoons shouldn’t be given ice cream or chocolate—it’s harmful to their metabolism.
But in the AI world, raccoons party like there’s no tomorrow.
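For professional or brand-safe use cases you may want to tighten the built-in filters. The schema above exposes `person_generation` and `safety_setting`; a minimal sketch that disallows people and applies the strictest filtering level:

{% code overflow="wrap" %}
```python
import requests
import json

def main():
    response = requests.post(
        "https://api.aimlapi.com/v1/images/generations",
        headers={
            "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",  # insert your AIML API key
            "Content-Type": "application/json",
        },
        json={
            "prompt": "Racoon eating ice-cream",
            "model": "google/imagen-4.0-ultra-generate-001",
            "aspect_ratio": "16:9",
            "person_generation": "dont_allow",        # do not generate people
            "safety_setting": "block_low_and_above",  # strictest filter level in the schema
        },
    )
    data = response.json()
    print(json.dumps(data, indent=2, ensure_ascii=False))

if __name__ == "__main__":
    main()
```
{% endcode %}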

--- # Source: https://docs.aimlapi.com/api-references/image-models/google/imagen-4-ultra.md # Imagen 4 Ultra Preview {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `imagen-4.0-ultra-generate-preview-06-06` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview Google’s highest quality image generation model as of July 2025. Supports automatic AI prompt enhancement and pre-moderation of generated content. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["imagen-4.0-ultra-generate-preview-06-06"]},"prompt":{"type":"string","maxLength":400,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"convert_base64_to_url":{"type":"boolean","default":true,"description":"If True, the URL to the image will be returned; otherwise, the file will be provided in base64 format."},"num_images":{"type":"integer","maximum":4,"default":1,"description":"The number of images to generate."},"seed":{"type":"integer","minimum":0,"maximum":4294967295,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"enhance_prompt":{"type":"boolean","default":true,"description":"Optional parameter to use an LLM-based prompt rewriting feature for higher-quality images that better match the original prompt. 
Disabling it may affect image quality and prompt alignment."},"aspect_ratio":{"type":"string","enum":["1:1","9:16","16:9","3:4","4:3"],"default":"1:1","description":"The aspect ratio of the generated image."},"person_generation":{"type":"string","enum":["dont_allow","allow_adult"],"default":"allow_adult","description":"Allow generation of people."},"safety_setting":{"type":"string","enum":["block_low_and_above","block_medium_and_above","block_only_high"],"default":"block_medium_and_above","description":"Adds a filter level to safety filtering."},"add_watermark":{"type":"boolean","default":false,"description":"Add an invisible watermark to the generated images."}},"required":["model","prompt"],"title":"imagen-4.0-ultra-generate-preview-06-06"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified aspect ratio using a simple prompt. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "prompt": "Racoon eating ice-cream", "model": "imagen-4.0-ultra-generate-preview-06-06", "aspect_ratio": "16:9" } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'imagen-4.0-ultra-generate-preview-06-06', prompt: 'Racoon eating ice-cream', aspect_ratio: '16:9' }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { data: [ { mime_type: 'image/png', url: 'https://cdn.aimlapi.com/generations/guepard/1756971509123-7ed4055c-1878-47c5-a060-7202392b78a2.png', prompt: "A curious raccoon is sitting upright on a weathered wooden picnic table, intensely focused on eating a melting ice cream cone. The raccoon holds the cone delicately in its paws, with sticky ice cream smeared around its mouth and on its fur. The ice cream is a vibrant strawberry pink color, dripping down the cone onto the table surface. Its mask-like facial markings are prominent, and its dark eyes are wide with concentration. The setting is a lush green park during golden hour, with soft, warm sunlight filtering through the background trees, creating a gentle bokeh effect. Empty picnic benches are visible in the soft-focus background. The wooden table is slightly worn, with visible grain and a few scattered leaves. The lighting is natural and warm, highlighting the raccoon's fur and the glistening ice cream." } ] } ``` {% endcode %}
So we obtained the following 1408x768 image by running this code example:

In reality, raccoons shouldn’t be given ice cream or chocolate—it’s harmful to their metabolism.
But in the AI world, raccoons party like there’s no tomorrow.
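If you need to mark generated assets or reproduce a specific result later, the schema above also exposes `add_watermark` (an invisible watermark, disabled by default) and `seed`. A minimal sketch; the seed value is arbitrary:

{% code overflow="wrap" %}
```python
import requests
import json

def main():
    response = requests.post(
        "https://api.aimlapi.com/v1/images/generations",
        headers={
            "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",  # insert your AIML API key
            "Content-Type": "application/json",
        },
        json={
            "prompt": "Racoon eating ice-cream",
            "model": "imagen-4.0-ultra-generate-preview-06-06",
            "aspect_ratio": "16:9",
            "add_watermark": True,  # embed an invisible watermark
            "seed": 12345,          # same seed + prompt + model version => same image
        },
    )
    data = response.json()
    print(json.dumps(data, indent=2, ensure_ascii=False))

if __name__ == "__main__":
    main()
```
{% endcode %}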

--- # Source: https://docs.aimlapi.com/api-references/speech-models/text-to-speech/inworld.md # Inworld - [inworld/tts-1](/api-references/speech-models/text-to-speech/inworld/tts-1.md) - [inworld/tts-1-max](/api-references/speech-models/text-to-speech/inworld/tts-1-max.md) --- # Source: https://docs.aimlapi.com/api-references/video-models/sber-ai/kandinsky5-distill-text-to-video.md # Kandinsky 5 Distill (Text-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `sber-ai/kandinsky5-distill-t2v` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} A diffusion model designed for fast text-to-video generation (no sound), offered as a compact variant of the [Kandinsky 5 (Text-to-Video)](https://docs.aimlapi.com/api-references/video-models/sber-ai/kandinsky5-text-to-video) model. Its output resolution is slightly above standard definition (SD). ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a prompt.\ This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["sber-ai/kandinsky5-distill-t2v"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"aspect_ratio":{"type":"string","enum":["3:2","1:1","2:3"],"default":"3:2","description":"The aspect ratio of the generated video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10],"default":"5"},"num_inference_steps":{"type":"integer","default":30,"description":"Number of inference steps for sampling. Higher values give better quality but take longer."}},"required":["model","prompt"],"title":"sber-ai/kandinsky5-distill-t2v"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"get":{"operationId":"VideoControllerV2_pollVideo_v2","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"description":"Successfully generated video","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Video.v2.PollVideoResponseDTO"}}}}},"tags":["Video Models"]}}},"components":{"schemas":{"Video.v2.PollVideoResponseDTO":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."},"duration":{"type":"number","nullable":true,"description":"The duration of the video."}},"required":["url"]},"duration":{"type":"number","nullable":true,"description":"The duration of the video."},"error":{"nullable":true,"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"tokens_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["tokens_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AI/ML API key instead of : api_key = "" # Creating and sending a video generation task to the server def generate_video(): url = "https://api.aimlapi.com/v2/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "sber-ai/kandinsky5-distill-t2v", "prompt": "A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming.", "duration": 5 } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = "https://api.aimlapi.com/v2/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Generate video gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... 
Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; const https = require("https"); const { URL } = require("url"); // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "sber-ai/kandinsky5-distill-t2v", prompt: ` A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming. `, }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data) } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const result = JSON.parse(body); callback(result); } }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json" } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const result = JSON.parse(body); callback(result); }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 10 * 1000; // 10 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: 0a3ca8ba-9af6-41d3-a938-41f762fcedc1:sber-ai/kandinsky5-distill-t2v Status: queued Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete: { id: '0a3ca8ba-9af6-41d3-a938-41f762fcedc1:sber-ai/kandinsky5-distill-t2v', status: 'completed', video: { url: 'https://cdn.aimlapi.com/flamingo/files/b/koala/yHNhY22wNAnpCbSqIaV8D_output.mp4' } } ``` {% endcode %}
**Processing time**: 53.6 sec. **Generated Video** (768x512, without sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/sber-ai/kandinsky5-text-to-video.md # Kandinsky 5 (Text-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `sber-ai/kandinsky5-t2v` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} A diffusion model designed for text-to-video generation with a resolution slightly above standard definition (SD). ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
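If you'd like a quick preview of the first call before working through the schemas, here is a minimal sketch of the task-creation request (the empty `api_key` placeholder and the sample prompt are illustrative; `aspect_ratio`, `duration`, and `num_inference_steps` are the optional parameters defined in the API schema below):

{% code overflow="wrap" %}
```python
import requests

# Insert your AI/ML API key instead of the empty string
api_key = ""

response = requests.post(
    "https://api.aimlapi.com/v2/video/generations",
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "model": "sber-ai/kandinsky5-t2v",
        "prompt": "A hot air balloon drifting over snowy mountain peaks at sunrise.",
        # Optional parameters (see the API schema below for defaults and allowed values):
        "aspect_ratio": "3:2",
        "duration": 5,
        "num_inference_steps": 30,
    },
)
response.raise_for_status()

# The returned generation ID is what you pass to the second (polling) endpoint
print(response.json()["id"])
```
{% endcode %}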
## API Schemas ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a prompt.\ This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["sber-ai/kandinsky5-t2v"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"aspect_ratio":{"type":"string","enum":["3:2","1:1","2:3"],"default":"3:2","description":"The aspect ratio of the generated video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10],"default":"5"},"num_inference_steps":{"type":"integer","default":30,"description":"Number of inference steps for sampling. Higher values give better quality but take longer."}},"required":["model","prompt"],"title":"sber-ai/kandinsky5-t2v"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AI/ML API key instead of : api_key = "" # Creating and sending a video generation task to the server def generate_video(): url = "https://api.aimlapi.com/v2/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "sber-ai/kandinsky5-t2v", "prompt": "A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming.", "duration": 5 } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = "https://api.aimlapi.com/v2/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Generate video gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... 
Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; const https = require("https"); const { URL } = require("url"); // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "sber-ai/kandinsky5-t2v", prompt: ` A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming. `, // duration: 5, }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data) } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const result = JSON.parse(body); callback(result); } }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json" } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const result = JSON.parse(body); callback(result); }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 10 * 1000; // 10 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': '1fe4344e-3d44-4bf8-9f04-0ac4bb312eec:pixverse/v5/text-to-video', 'status': 'queued', 'meta': {'usage': {'tokens_used': 840000}}} Generation ID: 1fe4344e-3d44-4bf8-9f04-0ac4bb312eec:pixverse/v5/text-to-video Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': '1fe4344e-3d44-4bf8-9f04-0ac4bb312eec:pixverse/v5/text-to-video', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/eagle/files/penguin/xK3kbIC5S0pR_oEU4Uw1Q_output.mp4', 'content_type': 'video/mp4', 'file_name': 'output.mp4', 'file_size': 6274330}} ``` {% endcode %}
**Processing time**: \~2 min 28 sec. **Generated Video** (768x512, without sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/integrations/kilo-code.md # Kilo Code ## About [Kilo Code](https://kilocode.ai/) is an open-source AI coding assistant and VS Code extension that enables natural-language code generation, debugging, and refactoring through customizable modes (Architect, Code, Debug, etc.). It supports multiple model providers, integrates with the Model Context Protocol (MCP), and allows developers to extend functionality with custom tools and workflows. This guide shows how to connect **AI/ML API** as a **custom provider** in **Kilo Code** for VS Code, using the **OpenAI-compatible** path.\ Follow the steps and screenshots below. *** ## Summary * **Provider:** OpenAI Compatible (inside Kilo Code) * **Base URL:** `https://api.aimlapi.com/v1` * **API Key:** from your AI/ML API dashboard * **Recommended Model IDs:** [openai/gpt-5-chat-latest](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-chat), [openai/o4-mini](https://docs.aimlapi.com/api-references/text-models-llm/openai/o4-mini), [openai/gpt-4.1](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4.1) ( or any other supported model) *** ## 0) Install Kilo Code Extension If you haven’t installed Kilo Code yet: 1. Open VS Code. 2. Go to Extensions (Ctrl+Shift+X or Cmd+Shift+X). 3. Search for “Kilo Code”. 4. Install the extension by Kilo Code. 5. Reload VS Code if prompted. After installation, you’ll see the Kilo Code icon in the sidebar. Or you can install it from the official site: [**kilocode.ai**](https://kilocode.ai/) *** ## 1) Open Kilo Code → “Use your own API key” From the Kilo Code welcome screen, click **Use your own API key**.
*** ## 2) Choose Provider: **OpenAI Compatible** Open the provider dropdown and select **OpenAI Compatible**.
{% hint style="info" %} Tip: Kilo Code also lists many other providers. For AI/ML API use the **OpenAI Compatible** option. {% endhint %} *** ## 3) Configure AI/ML API Settings Fill the form as follows: * **Base URL** ``` https://api.aimlapi.com/v1 ``` * **API Key**\ Paste your key from [**https://aimlapi.com/app/keys**](https://aimlapi.com/app/keys) * **Model** (examples) * [openai/gpt-5-chat-latest](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-chat) ← recommended universal chat * [openai/gpt-5-2025-08-07](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5) ← pinned dated release * [openai/o4-mini](https://docs.aimlapi.com/api-references/text-models-llm/openai/o4-mini) ← fast, low-cost * [openai/gpt-4.1](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4.1) ← stable classic * *any other supported model by your account* * **Use Azure**: `OFF` * **Set Azure API version**: leave disabled * **Image Support**: `ON` if you plan to send images (e.g., 4o / o4-mini) * **Max Output Tokens**: `-1` (let server decide) * **Context Window Size**: up to `128000` (adjust as needed)
{% hint style="info" %} **Note**: If you have custom headers (e.g., for proxies), add them in the **Custom Headers** field. {% endhint %} *** ## 4) Run Your First Task Open the Kilo Code panel, start a task (Ask/Code/Debug), and send a short test message, for example: ``` Hi from AI/ML API! ```
You should see a successful response with tokens/usage bars as in the screenshot. *** ## 🔬 Quick API Sanity Check (optional) You can also sanity-check your key via `curl`: {% code overflow="wrap" %} ```bash curl -X POST https://api.aimlapi.com/v1/chat/completions -H "Authorization: Bearer $AIMLAPI_KEY" -H "Content-Type: application/json" -d '{ "model": "openai/gpt-5-chat-latest", "messages": [ {"role":"system","content":"You are a concise assistant."}, {"role":"user","content":"Say hello in one sentence."} ] }' ``` {% endcode %} If the request succeeds, you’re ready to use the same model inside Kilo Code. *** ## 💡 Tips * **Profiles**: Create multiple *API Configuration Profiles* (e.g., default = `openai/gpt-5-chat-latest`, heavy = `openai/gpt-5-2025-08-07`, budget = `openai/o4-mini`). Switch per task. * **Images**: For vision tasks, keep **Image Support** enabled and use a vision-capable model. * **Token Limits**: Large responses may require raising *Max Output Tokens* or splitting the task. * **Headers**: If you need custom headers, add them in **Custom Headers**. *** ## 🧰 Troubleshooting * **401 / Unauthorized**: Re-check your API key and that it’s pasted without spaces. Regenerate if needed. * **404 / Model not found**: Verify the **exact Model ID** you selected is available to your account. * **No response / Network issues**: Corporate VPN/Proxy may block `api.aimlapi.com`. Whitelist the domain. * **Azure mode confusion**: Leave **Use Azure** toggled **off** unless you specifically need Azure routes. *** ## 📚 Helpful Links * **AI/ML API Keys**: * **AI/ML API Dashboard**: * **Kilo Code Docs**: *** Enjoy coding with **Kilo Code + AI/ML API**! 🚀 --- # Source: https://docs.aimlapi.com/api-references/text-models-llm/moonshot/kimi-k2-preview.md # kimi-k2-preview

{% hint style="info" %}
This documentation is valid for the following list of our models:

* `moonshot/kimi-k2-preview`
* `moonshot/kimi-k2-0905-preview`
{% endhint %}
Try in Playground
Try in Playground
## Model Overview `moonshot/kimi-k2-preview` (July 2025) is a mixture-of-experts model with strong reasoning, coding, and agentic capabilities. `moonshot/kimi-k2-0905-preview` (September 2025) is an upgraded version with improved grounding, better instruction following, and a stronger focus on coding and agentic tasks. Its context window has doubled from 128k to 256k tokens. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example-1-chat-completion) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to. :digit\_four: **(Optional) Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["moonshot/kimi-k2-preview","moonshot/kimi-k2-0905-preview"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function","builtin_function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"anyOf":[{"type":"string","enum":["$web_search"]},{"type":"string"}],"description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."},"required":{"type":"array","items":{"type":"string"}}},"required":["name"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. 
required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. 
Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"moonshot/kimi-k2-preview"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example #1: Chat Completion {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"moonshot/kimi-k2-0905-preview", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'moonshot/kimi-k2-0905-preview', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "chatcmpl-6908c55b7589dac387b2bd3b", "object": "chat.completion", "choices": [ { "index": 0, "finish_reason": "stop", "message": { "role": "assistant", "content": "Hello! How can I help you today?" } } ], "created": 1762182491, "model": "kimi-k2-0905-preview", "usage": { "prompt_tokens": 3, "completion_tokens": 53, "total_tokens": 56 } } ``` {% endcode %}
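The API schema above also accepts a `response_format` parameter. The sketch below is a small variation of Code Example #1 (not an official sample) that requests JSON output via `json_object` mode; note the schema's caveat that the model only returns JSON when a system or user message explicitly asks for it:

{% code overflow="wrap" %}
```python
import requests
import json

# Insert your AI/ML API key instead of the empty string
api_key = ""

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
    json={
        "model": "moonshot/kimi-k2-0905-preview",
        "messages": [
            # json_object mode requires an explicit instruction to answer in JSON
            {"role": "system", "content": "Reply only with a JSON object containing the keys 'city' and 'country'."},
            {"role": "user", "content": "Where is the Eiffel Tower located?"},
        ],
        "response_format": {"type": "json_object"},
    },
)

data = response.json()
# The assistant message content should now be a parsable JSON string
print(json.loads(data["choices"][0]["message"]["content"]))
```
{% endcode %}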
## Code Example #2: Web Search {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import json import requests from typing import Dict, Any # Insert your AIML API Key instead of : API_KEY = "" BASE_URL = "https://api.aimlapi.com/v1" HEADERS = { "Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json", } def search_impl(arguments: Dict[str, Any]) -> Any: return arguments def chat(messages): url = f"{BASE_URL}/chat/completions" payload = { "model": "moonshot/kimi-k2-0905-preview", "messages": messages, "temperature": 0.6, "tools": [ { "type": "builtin_function", "function": {"name": "$web_search"}, } ] } response = requests.post(url, headers=HEADERS, json=payload) response.raise_for_status() return response.json()["choices"][0] def main(): messages = [ {"role": "system", "content": "You are Kimi."}, {"role": "user", "content": "Please search for Moonshot AI Context Caching technology and tell me what it is in English."} ] finish_reason = None while finish_reason is None or finish_reason == "tool_calls": choice = chat(messages) finish_reason = choice["finish_reason"] message = choice["message"] if finish_reason == "tool_calls": messages.append(message) for tool_call in message["tool_calls"]: tool_call_name = tool_call["function"]["name"] tool_call_arguments = json.loads(tool_call["function"]["arguments"]) if tool_call_name == "$web_search": tool_result = search_impl(tool_call_arguments) else: tool_result = f"Error: unable to find tool by name '{tool_call_name}'" messages.append({ "role": "tool", "tool_call_id": tool_call["id"], "name": tool_call_name, "content": json.dumps(tool_result), }) print(message["content"]) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ``` Moonshot AI’s “Context Caching” is a data-management layer for the Kimi large-language-model API. What it does 1. You upload or define a large, static context once (for example a 100-page product manual, a legal contract, or a code base). 2. The platform stores this context in a fast-access cache and gives it a tag/ID. 3. In every subsequent call you only send the new user question; the system re-uses the cached context instead of transmitting and re-processing the whole document each time. 4. When the cache TTL expires it is deleted automatically; you can also refresh or invalidate it explicitly. Benefits - Up to 90 % lower token consumption (you pay only for the incremental prompt and the new response). - 83 % shorter time-to-first-token latency, because the heavy prefill phase is skipped on every reuse. - API price stays the same; savings come from not re-sending the same long context. Typical use cases - Customer-support bots that answer many questions against the same knowledge base. - Repeated analysis of a static code repository. - High-traffic AI applications that repeatedly query the same large document set. Billing (during public beta) - Cache creation: 24 CNY per million tokens cached. - Storage: 10 CNY per million tokens per minute. - Cache hit: 0.02 CNY per successful call that re-uses the cache. In short, Context Caching lets developers treat very long, seldom-changing context as a reusable asset, cutting both cost and latency for repeated queries. ``` {% endcode %}
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/moonshot/kimi-k2-turbo-preview.md # kimi-k2-turbo-preview

This documentation is valid for the following model:

  • moonshot/kimi-k2-turbo-preview
Try in Playground
## Model Overview The high-speed version of [Kimi K2](https://docs.aimlapi.com/api-references/text-models-llm/moonshot/kimi-k2-preview). A model fine-tuned for agentic tasks, coding, and conversational use, featuring a context window of up to 256,000 tokens and fast generation speeds — ideal for handling long documents and real-time interactions. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example-1-chat-completion) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to. :digit\_four: **(Optional) Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["moonshot/kimi-k2-turbo-preview"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function","builtin_function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"anyOf":[{"type":"string","enum":["$web_search"]},{"type":"string"}],"description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."},"required":{"type":"array","items":{"type":"string"}}},"required":["name"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. 
required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. 
Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"moonshot/kimi-k2-turbo-preview"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example #1: Chat Completion {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"moonshot/kimi-k2-turbo-preview", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'moonshot/kimi-k2-turbo-preview', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "chatcmpl-690895f53d8b644f83fe679e", "object": "chat.completion", "choices": [ { "index": 0, "finish_reason": "stop", "message": { "role": "assistant", "content": "Hi there! How can I help you today?" } } ], "created": 1762170357, "model": "kimi-k2-turbo-preview", "usage": { "prompt_tokens": 10, "completion_tokens": 231, "total_tokens": 241 } } ``` {% endcode %}
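According to the API schema above, this endpoint can also stream its output: setting `stream` to `true` makes the server return `chat.completion.chunk` objects as server-sent events. The sketch below shows one way to consume such a stream with `requests`; it assumes the usual OpenAI-style `data: ...` lines ending with a `[DONE]` sentinel, so treat it as an illustration rather than a full SSE client.

{% code overflow="wrap" %}
```python
import json
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "moonshot/kimi-k2-turbo-preview",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,  # ask for chat.completion.chunk events instead of a single JSON body
    },
    stream=True,
)
response.raise_for_status()

for raw in response.iter_lines():
    if not raw:
        continue
    line = raw.decode("utf-8")
    if not line.startswith("data:"):
        continue  # ignore SSE comments and other fields
    payload = line[len("data:"):].strip()
    if payload == "[DONE]":
        break
    chunk = json.loads(payload)
    delta = chunk["choices"][0].get("delta") or {}
    # Print tokens as they arrive; the final chunk may carry only finish_reason.
    print(delta.get("content") or "", end="", flush=True)
print()
```
{% endcode %}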
## Code Example #2: Web Search {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import json import requests from typing import Dict, Any # Insert your AIML API Key instead of : API_KEY = "" BASE_URL = "https://api.aimlapi.com/v1" HEADERS = { "Authorization": f"Bearer {API_KEY}", "Content-Type": "application/json", } def search_impl(arguments: Dict[str, Any]) -> Any: return arguments def chat(messages): url = f"{BASE_URL}/chat/completions" payload = { "model": "moonshot/kimi-k2-turbo-preview", "messages": messages, "temperature": 0.6, "tools": [ { "type": "builtin_function", "function": {"name": "$web_search"}, } ] } response = requests.post(url, headers=HEADERS, json=payload) response.raise_for_status() return response.json()["choices"][0] def main(): messages = [ {"role": "system", "content": "You are Kimi."}, {"role": "user", "content": "Please search for Moonshot AI Context Caching technology and tell me what it is in English."} ] finish_reason = None while finish_reason is None or finish_reason == "tool_calls": choice = chat(messages) finish_reason = choice["finish_reason"] message = choice["message"] if finish_reason == "tool_calls": messages.append(message) for tool_call in message["tool_calls"]: tool_call_name = tool_call["function"]["name"] tool_call_arguments = json.loads(tool_call["function"]["arguments"]) if tool_call_name == "$web_search": tool_result = search_impl(tool_call_arguments) else: tool_result = f"Error: unable to find tool by name '{tool_call_name}'" messages.append({ "role": "tool", "tool_call_id": tool_call["id"], "name": tool_call_name, "content": json.dumps(tool_result), }) print(message["content"]) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ``` Moonshot AI’s “Context Caching” is a **prompt-cache** layer for the Kimi large-language-model API. It lets you upload long, static text (documents, system prompts, few-shot examples, code bases, etc.) once, store the resulting key-value (KV) tensors in Moonshot’s servers, and then re-use that cached prefix in as many later requests as you want. Because the heavy “prefill” computation is already done, subsequent calls that start with the same context: - Skip re-processing the cached tokens - Return the first token up to **83 % faster** - Cost up to **90 % less input-token money** (you pay only a small cache-storage and cache-hit fee instead of the full per-token price every time) Typical use-cases are FAQ bots that always read the same manual, repeated analysis of a static repo, or any agent that keeps a long instruction set in every turn. You create a cache object with a TTL (time-to-live), pay a one-time creation charge plus a per-minute storage fee, and then pay a tiny fee each time an incoming request “hits” the cache. ``` {% endcode %}
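The Web Search example lets the model decide on its own when to call the built-in tool. For regular `function`-type tools, the schema above also exposes a `tool_choice` parameter: `none`, `auto`, `required`, or an object that forces one specific function. A hedged request-body sketch is shown below; the `get_weather` function and its parameters are invented for illustration and are not part of the API.

{% code overflow="wrap" %}
```python
# Illustrative payload only; post it to /v1/chat/completions as in Code Example #1.
payload = {
    "model": "moonshot/kimi-k2-turbo-preview",
    "messages": [{"role": "user", "content": "What's the weather in Berlin right now?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical function, implemented on your side
                "description": "Look up the current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    # "auto" lets the model choose, "required" forces at least one tool call,
    # and the object form below forces this particular function.
    "tool_choice": {"type": "function", "function": {"name": "get_weather"}},
}
```
{% endcode %}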
--- # Source: https://docs.aimlapi.com/api-references/video-models/kling-ai.md # Source: https://docs.aimlapi.com/api-references/image-models/kling-ai.md # Kling AI - [image-o1](/api-references/image-models/kling-ai/image-o1.md) --- # Source: https://docs.aimlapi.com/api-references/video-models/krea/krea-wan-14b-text-to-video.md # krea-wan-14b/text-to-video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `krea/krea-wan-14b/text-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} A 14-billion parameter model for text-to-video generation. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a prompt.\ This endpoint creates and sends a video generation task to the server — and returns a generation ID. Note that in this model, the video duration is defined by the number of frames, not seconds. You can calculate the duration in seconds based on the frame rate of 16 frames per second. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["krea/krea-wan-14b/text-to-video"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"num_frames":{"type":"integer","minimum":18,"maximum":162,"default":78,"description":"Number of frames to generate. Must be a multiple of 12 plus 6, for example 18, 30, 42, etc."},"enable_prompt_expansion":{"type":"boolean","default":true,"description":"Whether to enable prompt expansion."},"seed":{"type":"integer","description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. If unspecified, a random number is chosen."}},"required":["model","prompt"],"title":"krea/krea-wan-14b/text-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"get":{"operationId":"VideoControllerV2_pollVideo_v2","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"description":"Successfully generated video","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Video.v2.PollVideoResponseDTO"}}}}},"tags":["Video Models"]}}},"components":{"schemas":{"Video.v2.PollVideoResponseDTO":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."},"duration":{"type":"number","nullable":true,"description":"The duration of the video."}},"required":["url"]},"duration":{"type":"number","nullable":true,"description":"The duration of the video."},"error":{"nullable":true,"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"tokens_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["tokens_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AI/ML API key instead of : api_key = "" # Creating and sending a video generation task to the server def generate_video(): url = "https://api.aimlapi.com/v2/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "krea/krea-wan-14b/text-to-video", "prompt": "A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming.", "num_frames": 90 } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = "https://api.aimlapi.com/v2/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Generate video gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... 
Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; const https = require("https"); const { URL } = require("url"); // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "krea/krea-wan-14b/text-to-video", prompt: ` A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming. `, num_frames: 90, }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data) } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const result = JSON.parse(body); callback(result); } }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json" } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const result = JSON.parse(body); callback(result); }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 10 * 1000; // 10 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
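The example above requests 90 frames, which at the model's 16 fps corresponds to roughly 5.6 seconds of video. Since valid `num_frames` values must follow the pattern 12·k + 6 (18, 30, 42, … up to 162), a small helper like the one below can be handy for picking a frame count near a target duration; the function is purely illustrative and not part of the API.

{% code overflow="wrap" %}
```python
# Illustrative helper: snap a target duration (in seconds) to the nearest valid
# num_frames value (a multiple of 12 plus 6, clamped to the 18..162 range) at 16 fps.
def frames_for_duration(seconds: float, fps: int = 16) -> int:
    k = round((seconds * fps - 6) / 12)
    return max(18, min(162, 12 * k + 6))

print(frames_for_duration(5))   # 78  -> about 4.9 s
print(frames_for_duration(10))  # 162 -> about 10.1 s
```
{% endcode %}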
Response {% code overflow="wrap" %} ``` {'id': '67b69e0c-922f-4eae-9b90-8c4605131d3b:krea/krea-wan-14b/text-to-video', 'status': 'queued', 'meta': {'usage': {'tokens_used': 315000}}} Generation ID: 67b69e0c-922f-4eae-9b90-8c4605131d3b:krea/krea-wan-14b/text-to-video Status: queued Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': '67b69e0c-922f-4eae-9b90-8c4605131d3b:krea/krea-wan-14b/text-to-video', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/flamingo/files/b/zebra/GR3ehIPWwREVvJduWH6Pn_R07gifMb.mp4'}} ``` {% endcode %}
**Processing time**: 11.6 sec. **Generated Video** (832x480, without sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/krea/krea-wan-14b-video-to-video.md # krea-wan-14b/video-to-video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `krea/krea-wan-14b/video-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} A 14-billion parameter model for video editing. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a prompt.\ This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["krea/krea-wan-14b/video-to-video"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"video_url":{"type":"string","format":"uri","description":"A HTTPS URL pointing to a video or a data URI containing a video. This video will be used as a reference during generation."},"strength":{"type":"number","minimum":0,"maximum":1,"default":0.85,"description":"Denoising strength for the video-to-video generation. 0.0 preserves the original, 1.0 completely remakes the video."},"enable_prompt_expansion":{"type":"boolean","default":true,"description":"Whether to enable prompt expansion."},"seed":{"type":"integer","description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. If unspecified, a random number is chosen."}},"required":["model","prompt","video_url"],"title":"krea/krea-wan-14b/video-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"get":{"operationId":"VideoControllerV2_pollVideo_v2","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"description":"Successfully generated video","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Video.v2.PollVideoResponseDTO"}}}}},"tags":["Video Models"]}}},"components":{"schemas":{"Video.v2.PollVideoResponseDTO":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."},"duration":{"type":"number","nullable":true,"description":"The duration of the video."}},"required":["url"]},"duration":{"type":"number","nullable":true,"description":"The duration of the video."},"error":{"nullable":true,"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"tokens_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["tokens_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AI/ML API key instead of : api_key = "" # Creating and sending a video generation task to the server def generate_video(): url = "https://api.aimlapi.com/v2/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "krea/krea-wan-14b/video-to-video", "video_url":"https://zovi0.github.io/public_misc/kling-v2-master-t2v-racoon.mp4", "prompt":''' Add a small fairy as a rider on the raccoon’s back. She must have a black-and-golden face and a cloak in the colors of a dark emerald tropical butterfly with bright blue shimmering spots. 
''', "strength": 0.55, } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = "https://api.aimlapi.com/v2/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Generate video gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; const https = require("https"); const { URL } = require("url"); // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "krea/krea-wan-14b/video-to-video", prompt: ` Add a small fairy as a rider on the raccoon’s back. She must have a black-and-golden face and a cloak in the colors of a dark emerald tropical butterfly with bright blue shimmering spots. 
`, video_url:'https://zovi0.github.io/public_misc/kling-v2-master-t2v-racoon.mp4', strength: 0.55 }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data) } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const result = JSON.parse(body); callback(result); } }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json" } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const result = JSON.parse(body); callback(result); }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 10 * 1000; // 10 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
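Note that for this model both `prompt` and `video_url` are required, and `strength` (from 0.0 to 1.0, default 0.85) controls how far the result may drift from the source clip: lower values preserve more of the original footage. The sketch below submits the same edit at two strengths so the outputs can be compared; the helper function and the fixed seed are our own illustrative choices, not part of the API.

{% code overflow="wrap" %}
```python
import requests

# Insert your AIML API Key instead of :
API_KEY = ""

def submit_edit(strength: float) -> str:
    """Illustrative only: create one video-to-video task and return its generation ID."""
    response = requests.post(
        "https://api.aimlapi.com/v2/video/generations",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "krea/krea-wan-14b/video-to-video",
            "prompt": "Add a small fairy as a rider on the raccoon's back.",
            "video_url": "https://zovi0.github.io/public_misc/kling-v2-master-t2v-racoon.mp4",
            "strength": strength,  # 0.0 preserves the original, 1.0 completely remakes it
            "seed": 42,            # fix the seed so only strength differs between runs
        },
    )
    response.raise_for_status()
    return response.json()["id"]  # poll GET /v2/video/generations with this ID, as above

for s in (0.35, 0.85):
    print(s, submit_edit(s))
```
{% endcode %}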
Response {% code overflow="wrap" %} ```json5 {'id': '3637be63-2db8-4ecc-bc0f-00afbdf10a55:krea/krea-wan-14b/video-to-video', 'status': 'queued', 'meta': {'usage': {'tokens_used': 315000}}} Generation ID: 3637be63-2db8-4ecc-bc0f-00afbdf10a55:krea/krea-wan-14b/video-to-video Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': '3637be63-2db8-4ecc-bc0f-00afbdf10a55:krea/krea-wan-14b/video-to-video', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/flamingo/files/b/panda/W9bYaX0QBdz6mzlOcKVRo_mF899FFI.mp4'}} ``` {% endcode %}
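Once the task status becomes `completed`, the response contains a direct URL to the generated MP4. If you also want to save the file locally, a minimal sketch could look like the code below (the `download_video` helper and the file name are just illustrative):

{% code overflow="wrap" %}
```python
import requests

def download_video(final_response, file_name="krea-result.mp4"):
    """Download the generated video from the URL in the completed response."""
    video_url = final_response["video"]["url"]
    with requests.get(video_url, stream=True) as r:
        r.raise_for_status()
        with open(file_name, "wb") as f:
            for chunk in r.iter_content(chunk_size=8192):
                f.write(chunk)
    return file_name

# Example usage with the final response returned by main() in the example above:
# download_video(main())
```
{% endcode %}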
**Processing time**: 11.4 sec. **Generated Video** (832x480, without sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/krea.md # Krea - [krea-wan-14b/text-to-video](/api-references/video-models/krea/krea-wan-14b-text-to-video.md) - [krea-wan-14b/video-to-video](/api-references/video-models/krea/krea-wan-14b-video-to-video.md) --- # Source: https://docs.aimlapi.com/integrations/langflow.md # Langflow ## About [Langflow](https://www.langflow.org/) is a new visual framework for building multi-agent and RAG applications. It is open-source, Python-powered, fully customizable, and LLM and vector store agnostic. Its intuitive interface allows for easy manipulation of AI building blocks, enabling developers to quickly prototype and turn their ideas into powerful, real-world solutions. ## How to Use AIML API via Langflow A user of the Langflow framework can create a working AI pipeline (a *flow*) by simply adding visual components, connecting their inputs and outputs in the required order, and setting various available parameters in each component. At the center of such a flow is one or more sequential **model components** that generate text using LLMs. Choose **Models > AIML** in the sidebar, then click the "+" button or simply drag and drop the model element onto the work area. After that, connect it to your input and output elements:
The component at the center creates a chat model instance powered by AIML API.
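As listed in the Outputs table below, the `model` output of this component is a `ChatOpenAI` instance pointed at the AI/ML API. A rough, illustrative equivalent in plain Python (assuming the `langchain-openai` package is installed; the model name and parameter values here are placeholders) might look like this:

{% code overflow="wrap" %}
```python
from langchain_openai import ChatOpenAI

# Approximate equivalent of the AIML model component's configuration.
# Note: the /v1 suffix on the base URL is an assumption for direct chat-completion calls.
llm = ChatOpenAI(
    model="meta-llama/Llama-3-70b-chat-hf",   # model_name
    api_key="<YOUR_AIMLAPI_KEY>",             # api_key
    base_url="https://api.aimlapi.com/v1",    # aiml_api_base
    temperature=0.1,                          # temperature
    max_tokens=512,                           # max_tokens
)

print(llm.invoke("Hey, how's it going?").content)
```
{% endcode %}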
### Inputs {% hint style="info" %} Only 1-2 key parameters are usually displayed directly on the model component body. To configure most parameters, you need to click on the model element and then click the **Controls** button that appears above the model component. {% endhint %} | Name | Type | Description | | --------------- | ------------ | ----------------------------------------------------------------------------------------- | | `max_tokens` | Integer | The maximum number of tokens to generate. Set to 0 for unlimited tokens. Range: 0-128000. | | `model_kwargs` | Dictionary | Additional keyword arguments for the model. | | `model_name` | String | The name of the AIML model to use. Options are predefined in `AIML_CHAT_MODELS`. | | `aiml_api_base` | String | The base URL of the AIML API. Defaults to `https://api.aimlapi.com`. | | `api_key` | SecretString | The AIML API Key to use for the model. | | `temperature` | Float | Controls randomness in the output. Default: `0.1`. | | `seed` | Integer | Controls reproducibility of the job. | ### Outputs | Name | Type | Description | | ------- | ------------- | ------------------------------------------------------------------- | | `text` | String | Chat model response. | | `model` | LanguageModel | An instance of ChatOpenAI configured with the specified parameters. | For further information about the framework, please check the [official Langflow documentation.](https://docs.langflow.org/) --- # Source: https://docs.aimlapi.com/api-references/video-models/pixverse/lip-sync.md # lip-sync {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `pixverse/lip-sync` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} This model generates videos with synchronized audio. For lip-sync input, you may either supply text with a predefined voice, or pass a URL to an external audio file containing speech. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a reference video and a prompt. This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["pixverse/lip-sync"]},"video_url":{"type":"string","format":"uri","description":"A HTTPS URL pointing to a video or a data URI containing a video. This video will be used as a reference during generation."},"audio_url":{"type":"string","format":"uri","description":"A direct link to an online audio file or a Base64-encoded local to an audio file used for lip-syncing in the video. Use either audio_url or (lip_sync_tts_speaker together with lip_sync_tts_content), but not both."},"lip_sync_tts_content":{"type":"string","description":"The text content to be lip-synced in the video. Use either audio_url or (lip_sync_tts_speaker together with lip_sync_tts_content), but not both."},"lip_sync_tts_speaker":{"type":"string","enum":["Harper","Ava","Isabella","Sophia","Emily","Chloe","Julia","Mason","Jack","Liam","James","Oliver","Adrian","Ethan","Auto"],"description":"A predefined system voice used for generating speech in the video. Use either audio_url or (lip_sync_tts_speaker together with lip_sync_tts_content), but not both."}},"required":["model","video_url"],"title":"pixverse/lip-sync"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AI/ML API key instead of : api_key = "" # Creating and sending a video generation task to the server def generate_video(): url = "https://api.aimlapi.com/v2/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "pixverse/lip-sync", "video_url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/racoon-in-the-forest.mp4", "lip_sync_tts_content": "Through the forest, past the roots — I’ve got to get there fast! No time to stop!", "lip_sync_tts_speaker": "Oliver" } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() # print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = "https://api.aimlapi.com/v2/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() print(gen_response) gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["queued", "generating"]: print(f"Status: {status}. 
Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ''; // Creating and sending a video generation task to the server async function generateVideo() { const url = 'https://api.aimlapi.com/v2/video/generations'; const data = { model: 'pixverse/lip-sync', video_url: 'https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/racoon-in-the-forest.mp4', lip_sync_tts_content: 'Through the forest, past the roots — I’ve got to get there fast! No time to stop!', lip_sync_tts_speaker: 'Oliver' }; try { const response = await fetch(url, { method: 'POST', headers: { Authorization: `Bearer ${apiKey}`, 'Content-Type': 'application/json', }, body: JSON.stringify(data), }); if (!response.ok) { const errorText = await response.text(); console.error(`Error: ${response.status} - ${errorText}`); return null; } const responseData = await response.json(); console.log(responseData); return responseData; } catch (error) { console.error('Request failed:', error); return null; } } // Requesting the result of the task from the server using the generation_id async function getVideo(genId) { const url = new URL('https://api.aimlapi.com/v2/video/generations'); url.searchParams.append('generation_id', genId); try { const response = await fetch(url, { method: 'GET', headers: { Authorization: `Bearer ${apiKey}`, 'Content-Type': 'application/json', }, }); return await response.json(); } catch (error) { console.error('Error fetching video:', error); return null; } } // Initiates video generation and checks the status every 15 seconds until completion or timeout async function main() { const genResponse = await generateVideo(); if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 15 * 1000; // 15 sec const startTime = Date.now(); const checkStatus = async () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } const responseData = await getVideo(genId); if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["waiting", "queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); await new Promise(resolve => setTimeout(resolve, interval)); return checkStatus(); } else { console.log("Processing complete:\n", responseData); } }; await checkStatus(); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
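If you prefer to drive the lip-sync with your own recorded speech instead of text-to-speech, replace the `lip_sync_tts_content`/`lip_sync_tts_speaker` pair with `audio_url` (the two options are mutually exclusive). A minimal request body for that variant might look like the sketch below; the audio URL is a placeholder:

{% code overflow="wrap" %}
```python
data = {
    "model": "pixverse/lip-sync",
    "video_url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/racoon-in-the-forest.mp4",
    # A direct link to an online audio file (or a Base64-encoded local audio file).
    # Placeholder URL; replace it with your own speech recording:
    "audio_url": "https://example.com/my-speech.mp3",
}
```
{% endcode %}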
Statuses

| Status       | Description                                 |
| ------------ | ------------------------------------------- |
| `queued`     | Job is waiting in queue                     |
| `generating` | Video is being generated                    |
| `completed`  | Generation successful, video available      |
| `error`      | Generation failed, check the `error` field  |
Response {% code overflow="wrap" %} ```json5 {'id': 'z-IMFJo2ORUJBR7OWIqDM', 'status': 'queued', 'meta': {'usage': {'credits_used': 2000000}}} Generation ID: z-IMFJo2ORUJBR7OWIqDM Status: queued. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: {'id': 'z-IMFJo2ORUJBR7OWIqDM', 'status': 'succeeded', 'video': {'url': 'https://cdn.aimlapi.com/panda/pixverse%2Fmp4%2Fmedia%2Fweb%2Fori%2FPUQnIoDi49sKyQ48-vAtQ_seed0.mp4'}} ``` {% endcode %}
**Processing time**: \~34 sec. **Generated video** (1280x720, with sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/integrations/litellm.md # LiteLLM ## About [LiteLLM](https://www.litellm.ai/) is an open-source Python library that provides a unified API for interacting with multiple large language model providers. It allows developers to switch between different models with minimal code changes, optimizing cost and performance. LiteLLM simplifies integration by offering a single interface for various LLM endpoints, enabling seamless experimentation and deployment across different AI providers. If you use this library, you can also call models from AI/ML API through it. Below are the most common use cases: * [Chat completion](#chat-completion) * [Streaming](#streaming) * [Chat completion (asynchronous)](#async-completion) * [Streaming (asynchronous)](#async-streaming) * [Embedding (asynchronous)](#async-embedding) * [Image Generation (asynchronous)](#async-image-generation) ## Installation Install the library with the standard pip tool in terminal: ```sh pip install litellm ``` ## Making API Calls You can choose from LLama, Qwen, Flux, and 200+ other models on the [AI/ML API official website](https://aimlapi.com/models). ### Chat completion {% code overflow="wrap" %} ```python import litellm response = litellm.completion( # The model name must include prefix "openai/" + the model id from AI/ML API: model="openai/meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo", # your AI/ML API api-key: api_key="", api_base="https://api.aimlapi.com/v2", messages=[ { "role": "user", "content": "Hey, how's it going?", } ], ) ``` {% endcode %} ### Streaming {% code overflow="wrap" %} ```python import litellm response = litellm.completion( # The model name must include prefix "openai/" + the model id from AI/ML API: model="openai/Qwen/Qwen2-72B-Instruct", # your AI/ML API api-key api_key="", api_base="https://api.aimlapi.com/v2", messages=[ { "role": "user", "content": "Hey, how's it going?", } ], stream=True, ) for chunk in response: print(chunk) ``` {% endcode %} ### Async Completion {% code overflow="wrap" %} ```python import asyncio import litellm async def main(): response = await litellm.acompletion( # The model name must include prefix "openai/" + the model id from AI/ML API: model="openai/anthropic/claude-3-5-haiku", # your AI/ML API api-key api_key="", api_base="https://api.aimlapi.com/v2", messages=[ { "role": "user", "content": "Hey, how's it going?", } ], ) print(response) if __name__ == "__main__": asyncio.run(main()) ``` {% endcode %} ### Async Streaming {% code overflow="wrap" %} ```python import asyncio import traceback import litellm async def main(): try: print("test acompletion + streaming") response = await litellm.acompletion( # The model name must include prefix "openai/" + model id from AI/ML API: model="openai/nvidia/Llama-3.1-Nemotron-70B-Instruct-HF", # your AI/ML API api-key api_key="", api_base="https://api.aimlapi.com/v2", messages=[{"content": "Hey, how's it going?", "role": "user"}], stream=True, ) print(f"response: {response}") async for chunk in response: print(chunk) except: print(f"error occurred: {traceback.format_exc()}") pass if __name__ == "__main__": asyncio.run(main()) ``` {% endcode %} ### Async Embedding {% code overflow="wrap" %} ```python import asyncio import litellm async def main(): response = await litellm.aembedding( # The model name must include prefix "openai/" + model id from AI/ML API: model="openai/text-embedding-3-small", # your AI/ML API 
api-key api_key="", api_base="https://api.aimlapi.com/v1", # 👈 the URL has changed from v2 to v1 input="Your text string", ) print(response) if __name__ == "__main__": asyncio.run(main()) ``` {% endcode %} ### Async Image Generation {% code overflow="wrap" %} ```python import asyncio import litellm async def main(): response = await litellm.aimage_generation( # The model name must include prefix "openai/" + model id from AI/ML API: model="openai/dall-e-3", # your AI/ML API api-key api_key="", api_base="https://api.aimlapi.com/v1", # 👈 the URL has changed from v2 to v1 prompt="A cute baby sea otter", ) print(response) if __name__ == "__main__": asyncio.run(main()) ``` {% endcode %} --- # Source: https://docs.aimlapi.com/api-references/text-models-llm/meta/llama-3-chat-hf.md # Llama-3-chat-hf
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `meta-llama/Llama-3-70b-chat-hf`
{% endhint %}

Try in Playground
## Model Overview This model is optimized for dialogue use cases and outperforms many existing open-source chat models on common industry benchmarks. You can also view [a detailed comparison of this model](https://aimlapi.com/comparisons/qwen-2-vs-llama-3-comparison) on our main website. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to. :digit\_four: **(Optional)** **Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them; a short example also follows right after these instructions. :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
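For example, a request that adds a couple of optional parameters on top of the required `model` and `messages` fields might look like this (a minimal sketch; the prompt and parameter values are arbitrary):

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of the placeholder:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "meta-llama/Llama-3-70b-chat-hf",
        "messages": [{"role": "user", "content": "Give me three facts about llamas."}],
        # Optional parameters (see the API schema below for the full list):
        "temperature": 0.7,   # higher values make the output more random
        "max_tokens": 256,    # upper bound on the length of the completion
    },
)
print(response.json())
```
{% endcode %}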
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["meta-llama/Llama-3-70b-chat-hf"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. 
Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"echo":{"type":"boolean","description":"If True, the response will contain the prompt. Can be used with logprobs to return prompt logprobs."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. 
Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."}},"required":["model","messages"],"title":"meta-llama/Llama-3-70b-chat-hf"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"meta-llama/Llama-3-70b-chat-hf", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'meta-llama/Llama-3-70b-chat-hf', messages:[ { role:'user', // Insert your question for the model here, instead of Hello: content: 'Hello' } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': 'npQoMP3-4yUbBN-92dab967fbdeb248', 'object': 'chat.completion', 'choices': [{'index': 0, 'finish_reason': 'stop', 'logprobs': None, 'message': {'role': 'assistant', 'content': "Hello! It's nice to meet you. Is there something I can help you with, or would you like to chat?", 'tool_calls': []}}], 'created': 1744209255, 'model': 'meta-llama/Llama-3-70b-chat-hf', 'usage': {'prompt_tokens': 20, 'completion_tokens': 48, 'total_tokens': 68}} ``` {% endcode %}
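To extract just the assistant's reply from this structure, index into the first element of `choices` (a small follow-up to the Python code example above):

{% code overflow="wrap" %}
```python
# `data` is the parsed JSON response from the code example above
reply = data["choices"][0]["message"]["content"]
print(reply)  # "Hello! It's nice to meet you. ..."
```
{% endcode %}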
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/nvidia/llama-3.1-nemotron-70b-1.md # nemotron-nano-12b-v2-vl
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `nvidia/nemotron-nano-12b-v2-vl`
{% endhint %}

Try in Playground
## Model Overview The model offers strong document understanding and summarization capabilities. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to. :digit\_four: **(Optional)** **Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them; a short example also follows right after these instructions. :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
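As an illustration, this model's schema also exposes a `reasoning` configuration object; a request that asks for low reasoning effort and hides the reasoning trace might look like this (a minimal sketch; the prompt and parameter values are arbitrary):

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of the placeholder:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "nvidia/nemotron-nano-12b-v2-vl",
        "messages": [
            {"role": "user", "content": "Summarize the main idea of document-level summarization in two sentences."}
        ],
        # Optional reasoning configuration (see the API schema below):
        "reasoning": {"effort": "low", "exclude": True},
        "max_tokens": 512,
    },
)
print(response.json())
```
{% endcode %}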
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["nvidia/nemotron-nano-12b-v2-vl"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. 
Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. 
If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"reasoning":{"type":"object","properties":{"effort":{"type":"string","enum":["low","medium","high"],"description":"Reasoning effort setting"},"max_tokens":{"type":"integer","minimum":1,"description":"Max tokens of reasoning content. Cannot be used simultaneously with effort."},"exclude":{"type":"boolean","description":"Whether to exclude reasoning from the response"}},"description":"Configuration for model reasoning/thinking tokens"},"echo":{"type":"boolean","description":"If True, the response will contain the prompt. Can be used with logprobs to return prompt logprobs."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"top_a":{"type":"number","minimum":0,"maximum":1,"description":"Alternate top sampling parameter."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."}},"required":["model","messages"],"title":"nvidia/nemotron-nano-12b-v2-vl"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"nvidia/nemotron-nano-12b-v2-vl", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'nvidia/nemotron-nano-12b-v2-vl', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "gen-1762343744-rdCcOL8byCQwRBZ8QCkv", "provider": "DeepInfra", "model": "nvidia/nemotron-nano-12b-v2-vl", "object": "chat.completion", "created": 1762343744, "choices": [ { "logprobs": null, "finish_reason": "stop", "native_finish_reason": "stop", "index": 0, "message": { "role": "assistant", "content": "\n\nHello! How can I assist you today?\n", "refusal": null, "reasoning": "Okay, the user said \"Hello\". Let me start by greeting them back in a friendly and welcoming way. I should keep it simple and approachable, maybe something like \"Hello! How can I assist you today?\" That should work. I want to make sure they feel comfortable and open to asking for help. Let me check if there's anything else I need to add. No, keeping it straightforward is best here. Ready to respond.\n", "reasoning_details": [ { "type": "reasoning.text", "text": "Okay, the user said \"Hello\". Let me start by greeting them back in a friendly and welcoming way. I should keep it simple and approachable, maybe something like \"Hello! How can I assist you today?\" That should work. I want to make sure they feel comfortable and open to asking for help. Let me check if there's anything else I need to add. No, keeping it straightforward is best here. Ready to respond.\n", "format": "unknown", "index": 0 } ] } } ], "usage": { "prompt_tokens": 14, "completion_tokens": 102, "total_tokens": 116, "prompt_tokens_details": null } } ``` {% endcode %}
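The schema above also exposes `stream`, `stream_options`, and `reasoning`. If you prefer to receive the answer token by token instead of as a single response, the request below is a minimal streaming sketch in Python. The `<YOUR_AIMLAPI_KEY>` placeholder, the chosen `reasoning` effort value, and the `data: ... [DONE]` parsing loop are illustrative assumptions based on the common OpenAI-compatible server-sent-events framing, not an official reference example.

{% code overflow="wrap" %}
```python
import json
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # <YOUR_AIMLAPI_KEY> is a placeholder for your AIML API Key:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "nvidia/nemotron-nano-12b-v2-vl",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,                  # stream the response as server-sent events
        "reasoning": {"effort": "low"},  # optional: illustrative reasoning setting
        "max_tokens": 256,               # optional: cap the completion length
    },
    stream=True,  # let requests expose the body incrementally
)
response.raise_for_status()

# Assumption: each event line looks like "data: {...}" and the stream ends with "data: [DONE]".
for raw in response.iter_lines():
    if not raw:
        continue
    line = raw.decode("utf-8")
    if not line.startswith("data: "):
        continue
    payload = line[len("data: "):]
    if payload == "[DONE]":
        break
    chunk = json.loads(payload)
    if not chunk.get("choices"):
        continue  # e.g. a usage-only chunk
    delta = chunk["choices"][0].get("delta") or {}
    print(delta.get("content") or "", end="", flush=True)
```
{% endcode %}

If you also want token usage reported for the streamed completion, the schema additionally accepts `stream_options` with `include_usage` set to `true`.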
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/nvidia/llama-3.1-nemotron-70b.md # llama-3.1-nemotron-70b

This documentation is valid for the following list of our models:

  • nvidia/llama-3.1-nemotron-70b-instruct
Try in Playground
## Model Overview

A sophisticated LLM designed to enhance the performance of instruction-following tasks. It utilizes advanced training techniques and a robust architecture to generate human-like responses across a variety of applications.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account. (A minimal sketch of reading the key from an environment variable instead of hardcoding it follows these steps.)\
:black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
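A quick note on step 3: instead of pasting the key directly into the snippet, you can load it from the environment. The sketch below is illustrative only; the variable name `AIMLAPI_API_KEY` is an arbitrary choice, not something the API requires.

{% code overflow="wrap" %}
```python
import os
import requests

# Hypothetical environment variable name; export it in your shell first,
# e.g. `export AIMLAPI_API_KEY=...` (raises KeyError if it is not set).
api_key = os.environ["AIMLAPI_API_KEY"]

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
    json={
        "model": "nvidia/llama-3.1-nemotron-70b-instruct",
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}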
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["nvidia/llama-3.1-nemotron-70b-instruct"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. 
This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"top_a":{"type":"number","minimum":0,"maximum":1,"description":"Alternate top sampling parameter."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. 
The returned text will not contain the stop sequence."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"nvidia/llama-3.1-nemotron-70b-instruct"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"nvidia/llama-3.1-nemotron-70b-instruct", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'nvidia/llama-3.1-nemotron-70b-instruct', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %}
```json5
{
  "id": "gen-1744191323-N0aZy5UyzpOYfRwYbik3",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": {"content": [], "refusal": []},
      "message": {
        "role": "assistant",
        "content": "Hello!\n\nHow can I assist you today? Do you have:\n\n1. **A question** on a specific topic you'd like answered?\n2. **A problem** you're trying to solve and need help with?\n3. **A topic** you'd like to **discuss**?\n4. **A game or activity** in mind (e.g., trivia, word games, storytelling)?\n5. **Something else** on your mind (feel free to surprise me)?\n\nPlease respond with a number or describe what's on your mind, and I'll do my best to help!",
        "refusal": null
      }
    }
  ],
  "created": 1744191323,
  "model": "nvidia/llama-3.1-nemotron-70b-instruct",
  "usage": {"prompt_tokens": 11, "completion_tokens": 78, "total_tokens": 89}
}
```
{% endcode %}
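If you only need the assistant's reply rather than the whole response object, you can index into `choices` directly. A minimal sketch, continuing from the Python example above (it assumes `data` already holds the parsed JSON returned by `response.json()`):

{% code overflow="wrap" %}
```python
# Pull the assistant's reply out of the parsed response.
reply = data["choices"][0]["message"]["content"]
print(reply)

# Token accounting from the usage block, e.g. for cost tracking.
usage = data.get("usage", {})
print(usage.get("prompt_tokens"), "prompt tokens,", usage.get("completion_tokens"), "completion tokens")
```
{% endcode %}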
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/meta/llama-3.2-3b-instruct-turbo.md # Llama-3.2-3B-Instruct-Turbo

This documentation is valid for the following list of our models:

  • meta-llama/Llama-3.2-3B-Instruct-Turbo
Try in Playground
## Model Overview A large language model (LLM) optimized for instruction-following tasks, striking a balance between computational efficiency and high-quality performance. It excels in multilingual tasks, offering a lightweight solution without compromising on quality. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field; this is what the model will respond to. :digit\_four: **(Optional) Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters to adjust the model’s behavior; a short sketch with a couple of optional parameters follows these steps. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
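As a quick illustration of step 4, here is a minimal sketch of the same request with two of the optional parameters from the API schema added: `max_tokens` to cap the reply length and `temperature` to control randomness. The prompt, the parameter values, and the `<YOUR_AIMLAPI_KEY>` placeholder are only examples.

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Replace <YOUR_AIMLAPI_KEY> with your actual AIML API Key:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "meta-llama/Llama-3.2-3B-Instruct-Turbo",
        "messages": [
            {"role": "user", "content": "Give me three facts about llamas."}
        ],
        "max_tokens": 256,   # optional: upper bound on generated tokens
        "temperature": 0.7,  # optional: higher values make output more random
    },
)
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}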
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["meta-llama/Llama-3.2-3B-Instruct-Turbo"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. 
The returned text will not contain the stop sequence."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. 
You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."}},"required":["model","messages"],"title":"meta-llama/Llama-3.2-3B-Instruct-Turbo"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"meta-llama/Llama-3.2-3B-Instruct-Turbo", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'meta-llama/Llama-3.2-3B-Instruct-Turbo', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %}
```json5
{
  "id": "npQaJb3-4pPsy7-92da7b401ffd5eea",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I assist you today?",
        "tool_calls": []
      }
    }
  ],
  "created": 1744206709,
  "model": "meta-llama/Llama-3.2-3B-Instruct-Turbo",
  "usage": {"prompt_tokens": 5, "completion_tokens": 1, "total_tokens": 6}
}
```
{% endcode %}
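The API schema above also lists a `stream` option: when it is set to `true`, the response is delivered as server-sent events whose payloads are `chat.completion.chunk` objects. Below is a rough sketch of consuming such a stream with `requests`; it assumes the common `data: `-prefixed SSE lines with a `[DONE]` end marker, so verify the exact framing against the responses you actually receive. Error handling is omitted, and `<YOUR_AIMLAPI_KEY>` is a placeholder.

{% code overflow="wrap" %}
```python
import json
import requests

with requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",  # placeholder: use your actual key
        "Content-Type": "application/json",
    },
    json={
        "model": "meta-llama/Llama-3.2-3B-Instruct-Turbo",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,
    },
    stream=True,
) as response:
    for line in response.iter_lines():
        # SSE payload lines typically look like: data: {...json chunk...}
        if not line or not line.startswith(b"data: "):
            continue
        payload = line[len(b"data: "):]
        if payload.strip() == b"[DONE]":  # conventional end-of-stream marker
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta") or {}
        print(delta.get("content") or "", end="", flush=True)
print()
```
{% endcode %}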
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/meta/llama-3.3-70b-instruct-turbo.md # Llama-3.3-70B-Instruct-Turbo

This documentation is valid for the following list of our models:

  • meta-llama/Llama-3.3-70B-Instruct-Turbo
Try in Playground
## Model Overview An optimized language model designed for efficient text generation with advanced features and multilingual support. Specifically tuned for instruction-following tasks, making it suitable for applications requiring conversational capabilities and task-oriented responses. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field; this is what the model will respond to (a short sketch below these steps also shows how to include a `system` message alongside the user prompt). :digit\_four: **(Optional) Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
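As mentioned in step 3, the `messages` array from the API schema also accepts a `system` message, which is a common way to set the assistant's tone or constraints before the user prompt. A minimal sketch (the system and user texts, plus the `<YOUR_AIMLAPI_KEY>` placeholder, are only examples):

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",  # placeholder: use your actual key
        "Content-Type": "application/json",
    },
    json={
        "model": "meta-llama/Llama-3.3-70B-Instruct-Turbo",
        "messages": [
            # The system message steers the assistant's behavior for the whole conversation.
            {"role": "system", "content": "You are a concise assistant. Answer in one sentence."},
            {"role": "user", "content": "What is nucleus sampling?"},
        ],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}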
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["meta-llama/Llama-3.3-70B-Instruct-Turbo"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"echo":{"type":"boolean","description":"If True, the response will contain the prompt. Can be used with logprobs to return prompt logprobs."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. 
The returned text will not contain the stop sequence."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. 
You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."}},"required":["model","messages"],"title":"meta-llama/Llama-3.3-70B-Instruct-Turbo"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"meta-llama/Llama-3.3-70B-Instruct-Turbo", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'meta-llama/Llama-3.3-70B-Instruct-Turbo', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endtab %} {% endtabs %}
**Response**:

{% code overflow="wrap" %}
```json5
{
  "id": "npQ5s8C-2j9zxn-92d9f3c84a529790",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hello. It's nice to meet you. Is there something I can help you with or would you like to chat?",
        "tool_calls": []
      }
    }
  ],
  "created": 1744201161,
  "model": "meta-llama/Llama-3.3-70B-Instruct-Turbo",
  "usage": {
    "prompt_tokens": 67,
    "completion_tokens": 46,
    "total_tokens": 113
  }
}
```
{% endcode %}
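The schema above also covers streaming: with `"stream": true`, the endpoint returns `text/event-stream` chunks (`chat.completion.chunk` objects) instead of a single JSON body. The snippet below is a minimal sketch of consuming such a stream in Python; it assumes the common OpenAI-style SSE framing (`data: <json>` lines terminated by `data: [DONE]`), which the schema itself does not spell out, and `<YOUR_AIMLAPI_KEY>` is a placeholder for your key.

{% code overflow="wrap" %}
```python
import requests
import json

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "meta-llama/Llama-3.3-70B-Instruct-Turbo",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,
    },
    stream=True,
)
response.raise_for_status()

for line in response.iter_lines(decode_unicode=True):
    # Assumed OpenAI-style SSE framing: "data: <json>" lines, ending with "data: [DONE]".
    if not line or not line.startswith("data: "):
        continue
    payload = line[len("data: "):]
    if payload.strip() == "[DONE]":
        break
    chunk = json.loads(payload)
    delta = chunk["choices"][0].get("delta") or {}
    # Print tokens as they arrive.
    print(delta.get("content") or "", end="", flush=True)
print()
```
{% endcode %}

The schema also lists a `stream_options` object with an `include_usage` flag if you want token usage reported for streamed responses as well.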
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/meta/llama-3.3-70b-versatile.md # Llama-3.3-70B-Versatile

This documentation is valid for the following list of our models:

* `meta-llama/llama-3.3-70b-versatile`
## Model Overview

An advanced multilingual large language model with 70 billion parameters, optimized for diverse NLP tasks. It delivers high performance across benchmarks while remaining efficient for a wide range of applications.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace the placeholder in the `Authorization` header with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field; this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters to adjust the model’s behavior; a short example with a few optional parameters follows these steps. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
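As a sketch for step four, the request below adds a few of the optional parameters documented in the schema (a `system` message, `temperature`, and `max_tokens`) to the required `model` and `messages` fields. The parameter values are illustrative only, and `<YOUR_AIMLAPI_KEY>` is a placeholder for your key.

{% code overflow="wrap" %}
```python
import requests
import json

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "meta-llama/llama-3.3-70b-versatile",
        "messages": [
            # An optional system message steers the overall behavior.
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": "Hello"},
        ],
        # Optional parameters from the API schema (illustrative values):
        "temperature": 0.2,  # lower values make the output more deterministic
        "max_tokens": 256,   # cap on generated tokens, useful for cost control
    },
)
print(json.dumps(response.json(), indent=2, ensure_ascii=False))
```
{% endcode %}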
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["meta-llama/llama-3.3-70b-versatile"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. 
So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. 
Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"meta-llama/llama-3.3-70b-versatile"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"meta-llama/llama-3.3-70b-versatile", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'meta-llama/llama-3.3-70b-versatile', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endtab %} {% endtabs %}
**Response**:

{% code overflow="wrap" %}
```json5
{
  "id": "npQ5s8C-2j9zxn-92d9f3c84a529790",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hello. It's nice to meet you. Is there something I can help you with or would you like to chat?",
        "tool_calls": []
      }
    }
  ],
  "created": 1744201161,
  "model": "meta-llama/llama-3.3-70b-versatile",
  "usage": {
    "prompt_tokens": 67,
    "completion_tokens": 46,
    "total_tokens": 113
  }
}
```
{% endcode %}
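The schema for this model also documents the `tools` and `tool_choice` parameters for function calling. The sketch below shows one way to pass a single function definition and read back any `tool_calls` from the response; the `get_weather` function, its parameters, and the prompt are hypothetical, and `<YOUR_AIMLAPI_KEY>` is a placeholder for your key.

{% code overflow="wrap" %}
```python
import requests
import json

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "meta-llama/llama-3.3-70b-versatile",
        "messages": [
            {"role": "user", "content": "What is the weather like in Paris today?"}
        ],
        # A hypothetical function the model may decide to call:
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "description": "Get the current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "city": {"type": "string", "description": "City name"}
                        },
                        "required": ["city"],
                    },
                },
            }
        ],
        "tool_choice": "auto",
    },
)

message = response.json()["choices"][0]["message"]
for call in message.get("tool_calls") or []:
    # Arguments arrive as a JSON string; validate them before executing anything.
    print(call["function"]["name"], json.loads(call["function"]["arguments"]))
```
{% endcode %}

If the model does call the function, run it in your own code and return the result in a follow-up request as a `tool` role message with the matching `tool_call_id`, as described in the schema above.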
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/meta/llama-4-maverick.md # Llama-4-maverick

This documentation is valid for the following list of our models:

* `meta-llama/llama-4-maverick`
## Model Overview

With 17 billion active parameters and 128 experts, this is the best multimodal model in its class, beating GPT-4o and Gemini 2.0 Flash on a wide range of common benchmarks, while achieving results comparable to the new DeepSeek v3 on reasoning and coding with less than half the number of active parameters.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace the placeholder in the `Authorization` header with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field; this is what the model will respond to (a sketch showing how to also attach an image to the message follows these steps).

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
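Since this model is multimodal, its API schema accepts `image_url` content parts alongside text inside a user message. The sketch below sends one image URL together with a text question; the image URL shown is only a placeholder to replace with your own, the `detail` field is optional, and `<YOUR_AIMLAPI_KEY>` stands in for your key.

{% code overflow="wrap" %}
```python
import requests
import json

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "meta-llama/llama-4-maverick",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "What is shown in this picture?"},
                    {
                        "type": "image_url",
                        "image_url": {
                            # Replace with a publicly reachable JPG/PNG/GIF/WEBP image URL:
                            "url": "https://example.com/photo.jpg",
                            "detail": "auto",
                        },
                    },
                ],
            }
        ],
    },
)
print(json.dumps(response.json(), indent=2, ensure_ascii=False))
```
{% endcode %}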
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["meta-llama/llama-4-maverick"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. 
Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. 
Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. 
The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. 
Higher values decrease repetition."}},"required":["model","messages"],"title":"meta-llama/llama-4-maverick"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"meta-llama/llama-4-maverick", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'meta-llama/llama-4-maverick', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endtab %} {% endtabs %}
**Response**

{% code overflow="wrap" %}
```json5
{
  "id": "npXgTRD-28Eivz-92e226847aa70d87",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hello! How are you today? Is there something I can help you with or would you like to chat?",
        "tool_calls": []
      }
    }
  ],
  "created": 1744287125,
  "model": "meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8",
  "usage": {
    "prompt_tokens": 6,
    "completion_tokens": 41,
    "total_tokens": 47
  }
}
```
{% endcode %}
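Once the response is parsed, the assistant's reply sits in `choices[0].message.content`, and `usage` carries the token counts. A minimal sketch, continuing from the `data` dictionary in the Python example above:

{% code overflow="wrap" %}
```python
# `data` is the parsed JSON response from the Python example above.
reply = data["choices"][0]["message"]["content"]
print(reply)  # e.g. "Hello! How are you today? ..."

# Token usage, e.g. for cost tracking.
usage = data["usage"]
print(usage["prompt_tokens"], usage["completion_tokens"], usage["total_tokens"])
```
{% endcode %}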
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/meta/llama-4-scout.md

# Llama-4-scout

This documentation is valid for the following list of our models:

  • meta-llama/llama-4-scout
Try in Playground
## Model Overview

A 17-billion-active-parameter model with 16 experts, Llama 4 Scout is the best multimodal model in its class and more powerful than all previous-generation Llama models. It also offers an industry-leading context window of 1M tokens and delivers better results than [Gemma 3](https://docs.aimlapi.com/api-references/text-models-llm/google/gemma-3), Gemini 2.0 Flash-Lite, and Mistral 3.1 across a wide range of common benchmarks.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field; this is what the model will respond to.

:digit\_four: **(Optional) Adjust other parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough of setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
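Llama 4 Scout is multimodal: as the [API schema](#api-schema) below shows, a user message's `content` may be an array of parts of type `text` and `image_url` (JPG/JPEG, PNG, GIF, and WEBP images are supported). Below is a minimal sketch of such a request; the image URL is only a placeholder, so replace it with your own.

{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "meta-llama/llama-4-scout",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "What is shown in this picture?"},
                    {
                        "type": "image_url",
                        # Placeholder URL: replace with a real JPG/PNG/GIF/WEBP image
                        "image_url": {"url": "https://example.com/photo.jpg", "detail": "auto"},
                    },
                ],
            }
        ],
    },
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}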
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["meta-llama/llama-4-scout"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. 
Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. 
Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. 
The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. 
Higher values decrease repetition."}},"required":["model","messages"],"title":"meta-llama/llama-4-scout"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"meta-llama/llama-4-scout", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'meta-llama/llama-4-scout', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': 'npXpsYC-2j9zxn-92e24e9e0c97d74d', 'object': 'chat.completion', 'choices': [{'index': 0, 'finish_reason': 'stop', 'logprobs': None, 'message': {'role': 'assistant', 'content': "Hello! It's nice to meet you. Is there something I can help you with or would you like to chat?", 'tool_calls': []}}], 'created': 1744288767, 'model': 'meta-llama/Llama-4-Scout-17B-16E-Instruct', 'usage': {'prompt_tokens': 4, 'completion_tokens': 30, 'total_tokens': 34}} ``` {% endcode %}
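The schema above also describes a `text/event-stream` response made of `chat.completion.chunk` deltas. Below is a minimal streaming sketch; it assumes the endpoint accepts an OpenAI-style `stream: true` request flag and emits `data:`-prefixed SSE lines terminated by `data: [DONE]`, so adjust the parsing if your request schema differs.

```python
import requests
import json

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "meta-llama/llama-4-scout",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,  # assumed OpenAI-style streaming flag
    },
    stream=True,
)
response.raise_for_status()

# Each event line looks like "data: {...chunk JSON...}"; the stream ends with "data: [DONE]".
for line in response.iter_lines():
    if not line:
        continue
    decoded = line.decode("utf-8")
    if not decoded.startswith("data:"):
        continue
    payload = decoded[len("data:"):].strip()
    if payload == "[DONE]":
        break
    chunk = json.loads(payload)
    delta = chunk["choices"][0].get("delta", {})
    print(delta.get("content") or "", end="", flush=True)
print()
```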
--- # Source: https://docs.aimlapi.com/api-references/moderation-safety-models/meta/llama-guard-3-11b-vision-turbo.md # Llama-Guard-3-11B-Vision-Turbo {% hint style="info" %} This documentation is valid for the following list of our models: * `meta-llama/Llama-Guard-3-11B-Vision-Turbo` {% endhint %} ## Model Overview 11B Llama 3.2 model fine-tuned for content safety, detecting harmful multimodal prompts and text in image reasoning use cases. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## Submit a request ### API Schema {% openapi src="" path="/v1/chat/completions" method="post" %} [Llama-Guard-3-11B-Vision-Turbo.json](https://3927338786-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FROMd1X5PuqtikJ48n2N9%2Fuploads%2Fgit-blob-e9cb0fe4989d6627a88a8edc5b94fe80a8c2c564%2FLlama-Guard-3-11B-Vision-Turbo.json?alt=media\&token=debc8719-1d9e-4a4f-b527-e6ed7859bab8) {% endopenapi %} --- # Source: https://docs.aimlapi.com/api-references/moderation-safety-models/meta/llamaguard-2-8b.md # LlamaGuard-2-8b {% hint style="info" %} This documentation is valid for the following list of our models: * `meta-llama/LlamaGuard-2-8b` {% endhint %} ## Model Overview An 8B-parameter Llama 3-based safeguard model, designed for content classification in LLM inputs (prompt classification) and responses (response classification), similar to Llama Guard. Functioning as an LLM, it generates text outputs that indicate whether a given prompt or response is safe or unsafe, and if deemed unsafe, it specifies the violated content categories. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## Submit a request ### API Schema {% openapi src="" path="/v1/chat/completions" method="post" %} [LlamaGuard-2-8b.json](https://3927338786-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FROMd1X5PuqtikJ48n2N9%2Fuploads%2Fgit-blob-5a0f97b06bec2f52a15ebe9c45d5bf768ce7b482%2FLlamaGuard-2-8b.json?alt=media\&token=beac894d-483e-43e2-ae0a-1079b738bee5) {% endopenapi %} --- # Source: https://docs.aimlapi.com/api-references/video-models/ltxv/ltxv-2-fast.md # ltxv-2-fast {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `ltxv/ltxv-2-fast` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} The model generates realistic 6-, 8-, and 10-second videos (up to 4K resolution) with detailed visuals and audio. Runs a little faster while delivering video of somewhat lower quality than [LTXV 2](https://docs.aimlapi.com/api-references/video-models/ltxv/ltxv-2). ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
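As a quick illustration of the first call, here is a minimal request sketch with the optional parameters from the schema below filled in; the prompt is only an example, and the complete generate-and-poll code appears further down this page.

```python
import requests

# Insert your AIML API Key instead of :
API_KEY = ""

response = requests.post(
    "https://api.aimlapi.com/v2/video/generations",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "ltxv/ltxv-2-fast",
        "prompt": "A lighthouse on a cliff at sunset, waves crashing below.",
        # Optional parameters (allowed values taken from the schema below):
        "duration": 8,            # 6, 8, or 10 seconds
        "resolution": "1440p",    # "1080p", "1440p", or "2160p"
        "aspect_ratio": "16:9",
        "fps": 25,                # 25 or 50
        "generate_audio": True,
    },
)
print(response.json())  # the "id" field is used later to poll for the result
```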
## API Schemas ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a prompt.\ This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["ltxv/ltxv-2-fast"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame for the video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[6,8,10]},"resolution":{"type":"string","enum":["1080p","1440p","2160p"],"default":"1080p","description":"The resolution of the output video, where the number refers to the short side in pixels."},"aspect_ratio":{"type":"string","enum":["16:9"],"default":"16:9","description":"The aspect ratio of the generated video."},"fps":{"type":"integer","description":"Frames per second of the generated video.","enum":[25,50]},"generate_audio":{"type":"boolean","default":true,"description":"Whether to generate audio for the video."}},"required":["model","prompt"],"title":"ltxv/ltxv-2-fast"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AI/ML API key instead of : api_key = "" # Creating and sending a video generation task to the server def generate_video(): url = "https://api.aimlapi.com/v2/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "ltxv/ltxv-2-fast", "prompt": "A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming. He's roaring: WHERE ARE MY TREASURES?", "duration": 6 } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = "https://api.aimlapi.com/v2/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Generate video gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["waiting", "active", "queued", "generating"]: print(f"Status: {status}. 
Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "ltxv/ltxv-2-fast", prompt: "A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming. He's roaring: WHERE ARE MY TREASURES?", duration: 6, }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 15 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("Failed to start generation"); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const startTime = Date.now(); const timeout = 600000; const checkStatus = () => { if (Date.now() - startTime > timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["waiting", "active", "queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); setTimeout(checkStatus, 15000); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': 'fb77b923-55da-4e4d-9519-5c0018088c52:ltxv/ltxv-2-fast', 'status': 'queued', 'meta': {'usage': {'tokens_used': 504000}}} Generation ID: fb77b923-55da-4e4d-9519-5c0018088c52:ltxv/ltxv-2-fast Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: {'id': 'fb77b923-55da-4e4d-9519-5c0018088c52:ltxv/ltxv-2-fast', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/flamingo/files/b/tiger/qo7CU4MIQp7MqapedpsnK_E4p6CCTs.mp4'}} ``` {% endcode %}
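Once the task status is `completed`, the `video.url` field points to a downloadable MP4. A small sketch for saving it locally (the URL is taken from the response above, and the output filename is arbitrary):

```python
import requests

# URL taken from the "video" field of the completed response above
video_url = "https://cdn.aimlapi.com/flamingo/files/b/tiger/qo7CU4MIQp7MqapedpsnK_E4p6CCTs.mp4"

with requests.get(video_url, stream=True) as r:
    r.raise_for_status()
    with open("ltxv-2-fast-result.mp4", "wb") as f:
        for chunk in r.iter_content(chunk_size=8192):
            f.write(chunk)
```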
**Processing time**: 58.9 sec. **Generated Video** (1920x1080, with sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/ltxv/ltxv-2.md # ltxv-2 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `ltxv/ltxv-2` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} The model generates realistic 6-, 8-, and 10-second videos (up to 4K resolution) with detailed visuals and audio. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a prompt.\ This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["ltxv/ltxv-2"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame for the video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[6,8,10]},"resolution":{"type":"string","enum":["1080p","1440p","2160p"],"default":"1080p","description":"The resolution of the output video, where the number refers to the short side in pixels."},"aspect_ratio":{"type":"string","enum":["16:9"],"default":"16:9","description":"The aspect ratio of the generated video."},"fps":{"type":"integer","description":"Frames per second of the generated video.","enum":[25,50]},"generate_audio":{"type":"boolean","default":true,"description":"Whether to generate audio for the video."}},"required":["model","prompt"],"title":"ltxv/ltxv-2"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AI/ML API key instead of : api_key = "" # Creating and sending a video generation task to the server def generate_video(): url = "https://api.aimlapi.com/v2/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "ltxv/ltxv-2", "prompt": "A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming. He's roaring: WHERE ARE MY TREASURES?", "duration": 6 } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = "https://api.aimlapi.com/v2/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Generate video gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["waiting", "active", "queued", "generating"]: print(f"Status: {status}. 
Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; const https = require("https"); const { URL } = require("url"); // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "ltxv/ltxv-2", prompt: ` A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming. `, duration: 6, }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data) } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const result = JSON.parse(body); callback(result); } }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json" } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const result = JSON.parse(body); callback(result); }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 15 * 1000; // 15 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["waiting", "active", "queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }) } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': 'fee691ef-744b-41ed-8a78-24372b76fe9a:ltxv/ltxv-2', 'status': 'queued', 'meta': {'usage': {'tokens_used': 756000}}} Generation ID: fee691ef-744b-41ed-8a78-24372b76fe9a:ltxv/ltxv-2 Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: {'id': 'fee691ef-744b-41ed-8a78-24372b76fe9a:ltxv/ltxv-2', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/flamingo/files/b/lion/pXmuP9RkFdx_UcYIe0Cxk_FOlqvATP.mp4'}} ``` {% endcode %}
**Processing time**: 1 min 10 sec. **Generated Video** (1920x1080, with sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/ltxv.md # LTXV - [ltxv-2](/api-references/video-models/ltxv/ltxv-2.md) - [ltxv-2-fast](/api-references/video-models/ltxv/ltxv-2-fast.md) --- # Source: https://docs.aimlapi.com/api-references/video-models/luma-ai/luma-ai-v2.md # Luma Ray 1.6 (Text-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `luma/ray-1-6` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Overview The Luma AI Dream Machine API allows developers to generate and extend AI-generated videos based on text prompts. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves making two sequential API calls: * The first one is for creating and sending a video generation task to the server (returns a generation ID). This can be either a generation from a reference image/prompt or a video extension operation that adds length to an existing video. * The second one is for requesting the generated or extended video from the server using the generation ID received from the first endpoint. Within this API call, you can use either the standard endpoint to retrieve the generated/extended video or a special endpoint to request multiple generations at once. Below, you can find three corresponding API schemas and examples for all endpoint calls.
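Unlike the other video model pages, this page does not include a single end-to-end polling example, so here is a minimal sketch of the two calls. It submits the task through the unified `POST /v2/video/generations` endpoint (see the Generate video schema below) and assumes the generic `GET /v2/video/generations` polling endpoint used elsewhere in these docs also accepts `luma/ray-1-6` generation IDs; the Luma-specific fetch endpoints described below can be used instead.

```python
import time
import requests

# Insert your AIML API Key instead of :
API_KEY = ""
BASE_URL = "https://api.aimlapi.com/v2"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# 1) Create and submit the video generation task
task = requests.post(
    f"{BASE_URL}/video/generations",
    headers=HEADERS,
    json={
        "model": "luma/ray-1-6",
        "prompt": "Flying jellyfish",
        "aspect_ratio": "16:9",
    },
).json()
gen_id = task["id"]
print("Generation ID:", gen_id)

# 2) Poll every 10 seconds until the task is no longer queued or generating
while True:
    result = requests.get(
        f"{BASE_URL}/video/generations",
        headers=HEADERS,
        params={"generation_id": gen_id},
    ).json()
    if result.get("status") in ("queued", "generating", "waiting", "active"):
        time.sleep(10)
        continue
    print(result)
    break
```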
## API Schemas ### Generate video `loop` parameter controls if the generated video will be looped. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["luma/ray-1-6"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"aspect_ratio":{"type":"string","enum":["1:1","16:9","9:16","4:3","3:4","21:9","9:21"],"default":"16:9","description":"The aspect ratio of the generated video."},"keyframes":{"type":"object","properties":{"frame0":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string","format":"uri"}},"required":["type","url"]},{"type":"object","properties":{"type":{"type":"string","enum":["generation"]},"id":{"type":"string","format":"uuid"}},"required":["type","id"]},{"nullable":true}]},"frame1":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string","format":"uri"}},"required":["type","url"]},{"type":"object","properties":{"type":{"type":"string","enum":["generation"]},"id":{"type":"string","format":"uuid"}},"required":["type","id"]},{"nullable":true}]}},"description":"Keyframes for image-to-video, extend, or interpolate"},"loop":{"type":"boolean","default":false,"description":"Whether to loop the video"}},"required":["model","prompt"],"title":"luma/ray-1-6"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Fetch generation After sending a request for video generation, this task is added to the queue. Based on the service's load, the generation can be completed in seconds or take a bit more. 
## GET /v2/generate/video/luma-ai/generation > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}}},"paths":{"/v2/generate/video/luma-ai/generation":{"get":{"operationId":"LumaAiControllerV2_fetchGeneration_v2","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}},{"name":"state","required":false,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"description":""}},"tags":["Luma AI"]}}}} ``` ### Example: Fetch Single Generation For example, if you are waiting for video dreaming (when the video is popped from the queue and generation is in processing), then you can send the following request: {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests def main(): response = requests.get( "https://api.aimlapi.com/v2/generate/video/luma-ai/generation", params={ "generation_id": "755f9bbb-d99b-4880-992b-f05244ddba61", "status": "dreaming" }, headers={ "Authorization": "Bearer ", "Content-Type": "application/json", }, ) response.raise_for_status() data = response.json() print("Generation:", data) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const main = async () => { const url = new URL('https://api.aimlapi.com/v2/generate/video/luma-ai/generation'); url.searchParams.set('generation_id', '755f9bbb-d99b-4880-992b-f05244ddba61'); url.searchParams.set('state', 'dreaming'); const data = await fetch(url, { method: 'GET', headers: { Authorization: 'Bearer ', 'Content-Type': 'application/json', }, }).then((res) => res.json()); console.log('Generation:', data); }; main(); ``` {% endcode %} {% endtab %} {% endtabs %} ### Fetch Multiple Generations Instead of using the `generation_id` parameter, you will pass `generation_ids`, which can be an array of IDs. This parameter can also accept IDs separated by commas. 
## GET /v2/generate/video/luma-ai/generations > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}}},"paths":{"/v2/generate/video/luma-ai/generations":{"get":{"operationId":"LumaAiControllerV2_fetchGenerations_v2","parameters":[{"name":"generation_ids","required":true,"in":"query","schema":{"anyOf":[{"type":"array","items":{"type":"string","format":"uuid"},"minItems":1},{"type":"string"}]}},{"name":"status","required":false,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"description":""}},"tags":["Luma AI"]}}}} ``` ### Example: Fetch Multiple Generations {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests def main(): response = requests.get( "https://api.aimlapi.com/v2/generate/video/luma-ai/generations", params={ "generation_ids[]": "755f9bbb-d99b-4880-992b-f05244ddba61", "status": "streaming", }, headers={ "Authorization": "Bearer ", "Content-Type": "application/json", }, ) response.raise_for_status() data = response.json() print("Generation:", data) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const main = async () => { const url = new URL('https://api.aimlapi.com/v2/generate/video/luma-ai/generations'); url.searchParams.set('generation_ids[]', '755f9bbb-d99b-4880-992b-f05244ddba61'); url.searchParams.set('status', 'dreaming'); const data = await fetch(url, { method: 'GET', headers: { Authorization: 'Bearer ', 'Content-Type': 'application/json', }, }).then((res) => res.json()); console.log('Generation:', data); }; main(); ``` {% endcode %} {% endtab %} {% endtabs %} ### Example: Generate Video {% hint style="info" %} Ensure you replace `` with your actual API key before running the code. {% endhint %} {% tabs %} {% tab title="Python" %} ```python import requests def main(): url = "https://api.aimlapi.com/v2/generate/video/luma-ai/generation" payload = { "prompt": "Flying jellyfish", "aspect_ratio": "16:9" } headers = { "Authorization": "Bearer ", "Content-Type": "application/json" } response = requests.post(url, json=payload, headers=headers) print("Generation:", response.json()) if __name__ == "__main__": main() ``` {% endtab %} {% tab title="JavaScript" %} ```javascript const main = async () => { const response = await fetch('https://api.aimlapi.com/v2/generate/video/luma-ai/generation', { method: 'POST', headers: { Authorization: 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ prompt: 'A jellyfish in the ocean', aspect_ratio: '16:9', loop: false }), }).then((res) => res.json()); console.log('Generation:', response); }; main(); ``` {% endtab %} {% endtabs %} ### Extend video You can extend a video using an existing video you generated before (using its generation ID) or by using an image (via URL). The extension can be done by appending to or prepending from the original content. The `keyframes` parameter controls the following extensions. 
It can include parameters for defining frames: * **first frame** (`frame0`) * **last frame** (`frame1`) For example, if you want to use an image as a reference for a frame: ```json { "keyframes": { "frame0": { "type": "image", "url": "https://example.com/image1.png" } } } ``` Or, in the case of using a previously generated video: ```json { "keyframes": { "frame1": { "type": "generation", "id": "0f3ea4aa-10e7-4dae-af0b-263ab4ac45f9" } } } ``` ## POST /v2/generate/video/luma-ai/generation > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"LumaAi.v2.CreateGenerationPayload":{"type":"object","properties":{"generation_type":{"type":"string","nullable":true,"enum":["video"]},"prompt":{"type":"string"},"aspect_ratio":{"type":"string","enum":["1:1","16:9","9:16","4:3","3:4","21:9","9:21"]},"loop":{"type":"boolean","default":false},"keyframes":{"type":"object","nullable":true,"properties":{"frame0":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["generation"]},"id":{"type":"string","format":"uuid"}},"required":["type","id"],"additionalProperties":false},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string","format":"uri"}},"required":["type","url"],"additionalProperties":false},{"nullable":true}]},"frame1":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["generation"]},"id":{"type":"string","format":"uuid"}},"required":["type","id"],"additionalProperties":false},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string","format":"uri"}},"required":["type","url"],"additionalProperties":false},{"nullable":true}]}}},"callback_url":{"type":"string","nullable":true,"format":"uri"},"model":{"type":"string","enum":["ray-1-6","ray-2","ray-flash-2"],"default":"ray-2"},"resolution":{"type":"string","nullable":true,"enum":["540p","720p","1080p","4k"]},"duration":{"type":"string","nullable":true,"enum":["5s","9s"]}},"required":["prompt"],"additionalProperties":false}}},"paths":{"/v2/generate/video/luma-ai/generation":{"post":{"operationId":"LumaAiControllerV2_createGeneration_v2","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"$ref":"#/components/schemas/LumaAi.v2.CreateGenerationPayload"}}}},"responses":{"201":{"description":""}},"tags":["Luma AI"]}}}} ``` ## Examples {% hint style="warning" %} Ensure you replace `` with your actual API key before running the code. 
{% endhint %} ### Extension with the Image {% tabs %} {% tab title="Python" %} ```python import requests def main(): url = "https://api.aimlapi.com/v2/generate/video/luma-ai/generation" headers = { "Authorization": "Bearer ", "Content-Type": "application/json" } payload = { "prompt": "Flying jellyfish", "aspect_ratio": "16:9", "keyframes": { "frame0": { "type": "image", "url": "https://example.com/image1.png" } } } response = requests.post(url, json=payload, headers=headers) print("Generation:", response.json()) if __name__ == "__main__": main() ``` {% endtab %} {% tab title="JavaScript" %} ```javascript const main = async () => { const response = await fetch('https://api.aimlapi.com/v2/generate/video/luma-ai/generation', { method: 'POST', headers: { Authorization: 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ prompt: 'A jellyfish in the ocean', aspect_ratio: '16:9', keyframes: { frame0: { type: 'image', url: 'https://example.com/image1.png', }, }, }), }).then((res) => res.json()); console.log('Generation:', response); }; main(); ``` {% endtab %} {% endtabs %} ### Extension with the Generation {% tabs %} {% tab title="Python" %} ```python import requests def main(): url = "https://api.aimlapi.com/v2/generate/video/luma-ai/generation" headers = { "Authorization": "Bearer ", "Content-Type": "application/json" } payload = { "prompt": "Flying jellyfish", "aspect_ratio": "16:9", "keyframes": { "frame0": { "type": "generation", "id": "0f3ea4aa-10e7-4dae-af0b-263ab4ac45f9" } } } response = requests.post(url, json=payload, headers=headers) print("Generation:", response.json()) if __name__ == "__main__": main() ``` {% endtab %} {% tab title="JavaScript" %} ```javascript const main = async () => { const response = await fetch('https://api.aimlapi.com/v2/generate/video/luma-ai/generation', { method: 'POST', headers: { Authorization: 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ prompt: 'A jellyfish in the ocean', aspect_ratio: '16:9', keyframes: { frame0: { type: 'generation', id: '0f3ea4aa-10e7-4dae-af0b-263ab4ac45f9', }, }, }), }).then((res) => res.json()); console.log('Generation:', response); }; main(); ``` {% endtab %} {% endtabs %} --- # Source: https://docs.aimlapi.com/api-references/video-models/luma-ai.md # Luma AI The Luma AI Dream Machine API allows developers to generate, retrieve, and extend AI-generated content using a variety of inputs. This API is particularly useful for creative applications, such as generating and extending video from text prompts. --- # Source: https://docs.aimlapi.com/api-references/video-models/luma-ai/luma-ray-2.md # Luma Ray 2 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `luma/ray-2` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} This model generates up to 9-second clips at 4K, compared to lower resolutions and shorter durations in [Ray 1.6](https://docs.aimlapi.com/api-references/video-models/luma-ai/broken-reference). You can specify the first and last frames as images or extend previously generated videos by passing their generation IDs. Looped videos are also supported. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
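The full example below uses image keyframes. The schema also allows looping the result and extending a previously generated clip by passing its generation ID as a keyframe; here is a hedged request-body sketch of that variant (the prompt is only an example and the UUID is a placeholder for one of your own generation IDs).

```python
import requests

# Insert your AIML API Key instead of :
API_KEY = ""

response = requests.post(
    "https://api.aimlapi.com/v2/video/generations",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "luma/ray-2",
        "prompt": "The camera keeps gliding forward as the scene slowly brightens.",
        "duration": 9,            # 5 or 9 seconds
        "resolution": "720p",     # "540p", "720p", "1080p", or "4k"
        "loop": True,
        "keyframes": {
            # Placeholder UUID: use the ID of a video you generated earlier
            "frame0": {"type": "generation", "id": "00000000-0000-0000-0000-000000000000"},
        },
    },
)
print(response.json())  # returns the generation ID; poll it as shown in the full example below
```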
## API Schemas ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a reference image and a prompt.\ This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["luma/ray-2"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"resolution":{"type":"string","enum":["540p","720p","1080p","4k"],"default":"1080p","description":"The resolution of the output video, where the number refers to the short side in pixels."},"aspect_ratio":{"type":"string","enum":["1:1","16:9","9:16","4:3","3:4","21:9","9:21"],"default":"16:9","description":"The aspect ratio of the generated video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,9],"default":"5"},"keyframes":{"type":"object","properties":{"frame0":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string","format":"uri"}},"required":["type","url"]},{"type":"object","properties":{"type":{"type":"string","enum":["generation"]},"id":{"type":"string","format":"uuid"}},"required":["type","id"]},{"nullable":true}]},"frame1":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string","format":"uri"}},"required":["type","url"]},{"type":"object","properties":{"type":{"type":"string","enum":["generation"]},"id":{"type":"string","format":"uuid"}},"required":["type","id"]},{"nullable":true}]}},"description":"Keyframes for image-to-video, extend, or interpolate"},"loop":{"type":"boolean","default":false,"description":"Whether to loop the video"}},"required":["model","prompt"],"title":"luma/ray-2"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. 
This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. ## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"get":{"operationId":"VideoControllerV2_pollVideo_v2","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"description":"Successfully generated video","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Video.v2.PollVideoResponseDTO"}}}}},"tags":["Video Models"]}}},"components":{"schemas":{"Video.v2.PollVideoResponseDTO":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."},"duration":{"type":"number","nullable":true,"description":"The duration of the video."}},"required":["url"]},"duration":{"type":"number","nullable":true,"description":"The duration of the video."},"error":{"nullable":true,"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"tokens_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["tokens_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # replace with your actual AI/ML API key api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "luma/ray-2", "prompt": "The camera moves down, dives underwater and moves through a dark, moody world of greenish light and drifting plants. 
Giant white koi fish emerge from the shadows and turn curiously toward the camera as it passes, their scales shimmering faintly in the murky depths.", "keyframes":{ "frame0": { "type": "image", "url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/landscape.jpg", }, "frame1": { "type": "image", "url": "https://cdn.aimlapi.com/assets/content/white-fish.png", }, }, "duration": "5", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() # print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } # Insert your AIML API Key instead of : headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) # print("Generation:", response.json()) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "luma/ray-2", prompt: "The camera moves down, dives underwater and moves through a dark, moody world of greenish light and drifting plants. 
Giant white koi fish emerge from the shadows and turn curiously toward the camera as it passes, their scales shimmering faintly in the murky depths.", keyframes: { frame0: { type: "image", url: "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/landscape.jpg" }, frame1: { type: "image", url: "https://cdn.aimlapi.com/assets/content/white-fish.png" } }, duration: "5" }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("Failed to start generation"); return; } const genId = genResponse.id; console.log("Gen_ID:", genId); const startTime = Date.now(); const timeout = 600000; const checkStatus = () => { if (Date.now() - startTime > timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, 10000); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: 7c880e75-9892-4238-8464-49cb0c6deabd:luma/ray-2 Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': '7c880e75-9892-4238-8464-49cb0c6deabd:luma/ray-2', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/luma/dream_machine/25f7f772-d8f4-494e-9cdf-ab5a2c7ce3fe/2a50c77c-2e44-4bfd-915b-9df61f5ff202_resultdbe5e2f21db1effd.mp4'}} ``` {% endcode %}
**Processing time**: \~1 min 48 sec. **Original** (1280x720, without sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/luma-ai/luma-ray-flash-2.md # Luma Ray Flash 2 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `luma/ray-flash-2` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} This model generates up to 9-second clips at 4K, compared to lower resolutions and shorter durations in [Ray 1.6](https://docs.aimlapi.com/api-references/video-models/luma-ai/broken-reference). You can specify the first and last frames as images or extend previously generated videos by passing their generation IDs. Looped videos are also supported.\ This version is nearly twice as fast as [Luma Ray 2](https://docs.aimlapi.com/api-references/video-models/luma-ai/luma-ray-2). ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
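Before the schemas, here is a compact sketch of that two-call flow in Python. It is only an outline with a placeholder prompt and a couple of optional parameters filled in; the complete, commented version is in the [Full Example](#full-example-generating-and-retrieving-the-video-from-the-server) at the bottom of this page.

{% code overflow="wrap" %}
```python
import time
import requests

API_KEY = ""  # insert your AI/ML API key
ENDPOINT = "https://api.aimlapi.com/v2/video/generations"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# 1) Create and submit the generation task; the response contains a generation ID
task = requests.post(
    ENDPOINT,
    headers=HEADERS,
    json={
        "model": "luma/ray-flash-2",
        "prompt": "A koi pond at dawn, soft mist drifting over the water.",  # placeholder prompt
        "resolution": "1080p",  # optional: "540p" | "720p" | "1080p" | "4k"
        "duration": 5,          # optional: 5 or 9 seconds
    },
)
task.raise_for_status()
gen_id = task.json()["id"]

# 2) Poll the same path with the generation ID until the task leaves the queue
while True:
    result = requests.get(ENDPOINT, headers=HEADERS, params={"generation_id": gen_id}).json()
    if result["status"] not in ("queued", "generating"):
        break
    time.sleep(10)

print(result.get("video", {}).get("url"))
```
{% endcode %}

A 10-second polling interval is a reasonable default here, since generation typically takes on the order of a minute.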
## API Schemas ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a reference image and a prompt.\ This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["luma/ray-flash-2"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"resolution":{"type":"string","enum":["540p","720p","1080p","4k"],"default":"1080p","description":"The resolution of the output video, where the number refers to the short side in pixels."},"aspect_ratio":{"type":"string","enum":["1:1","16:9","9:16","4:3","3:4","21:9","9:21"],"default":"16:9","description":"The aspect ratio of the generated video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,9],"default":"5"},"keyframes":{"type":"object","properties":{"frame0":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string","format":"uri"}},"required":["type","url"]},{"type":"object","properties":{"type":{"type":"string","enum":["generation"]},"id":{"type":"string","format":"uuid"}},"required":["type","id"]},{"nullable":true}]},"frame1":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string","format":"uri"}},"required":["type","url"]},{"type":"object","properties":{"type":{"type":"string","enum":["generation"]},"id":{"type":"string","format":"uuid"}},"required":["type","id"]},{"nullable":true}]}},"description":"Keyframes for image-to-video, extend, or interpolate"},"loop":{"type":"boolean","default":false,"description":"Whether to loop the video"}},"required":["model","prompt"],"title":"luma/ray-flash-2"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. 
This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. ## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"get":{"operationId":"VideoControllerV2_pollVideo_v2","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"description":"Successfully generated video","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Video.v2.PollVideoResponseDTO"}}}}},"tags":["Video Models"]}}},"components":{"schemas":{"Video.v2.PollVideoResponseDTO":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."},"duration":{"type":"number","nullable":true,"description":"The duration of the video."}},"required":["url"]},"duration":{"type":"number","nullable":true,"description":"The duration of the video."},"error":{"nullable":true,"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"tokens_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["tokens_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # replace with your actual AI/ML API key api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "luma/ray-flash-2", "prompt": "The camera moves down and gradually dives underwater and moves through a dark, moody world of greenish light and drifting plants. 
Giant white koi fish emerge from the shadows and turn curiously toward the camera as it passes, their scales shimmering faintly in the murky depths.", "keyframes":{ "frame0": { "type": "image", "url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/landscape.jpg", }, "frame1": { "type": "image", "url": "https://cdn.aimlapi.com/assets/content/white-fish.png", }, }, "duration": "5", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() # print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } # Insert your AIML API Key instead of : headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) # print("Generation:", response.json()) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "luma/ray-flash-2", prompt: "The camera moves down and gradually dives underwater and moves through a dark, moody world of greenish light and drifting plants. 
Giant white koi fish emerge from the shadows and turn curiously toward the camera as it passes, their scales shimmering faintly in the murky depths.", keyframes: { frame0: { type: "image", url: "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/landscape.jpg" }, frame1: { type: "image", url: "https://cdn.aimlapi.com/assets/content/white-fish.png" } }, duration: "5" }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("Failed to start generation"); return; } const genId = genResponse.id; console.log("Gen_ID:", genId); const startTime = Date.now(); const timeout = 600000; const checkStatus = () => { if (Date.now() - startTime > timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, 10000); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: df551652-1bbe-4d38-9c99-d0ecb69db192:luma/ray-flash-2 Status: queued Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': 'df551652-1bbe-4d38-9c99-d0ecb69db192:luma/ray-flash-2', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/luma/dream_machine/8523b741-f975-4a8b-84d0-05c51f1cbb04/f9cff453-d98f-4243-a419-164f7c228c38_result76b52833979928ee.mp4'}} ``` {% endcode %}
**Processing time**: \~ 1 min. **Original** (1280x720, without sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/music-models/google/lyria-2.md # Lyria 2 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `google/lyria2` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} An advanced audio generation model designed to create high-quality audio tracks from textual prompts. ## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#quick-code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

{% hint style="success" %}
Generating a music piece using this model involves sequentially calling two endpoints:

* The first one is for creating and sending a music generation task to the server (returns a generation ID).
* The second one is for requesting the generated piece from the server using the generation ID received from the first endpoint.

The code example combines both endpoint calls.
{% endhint %}

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Provide your instructions via the `prompt` parameter. The model will use them to generate a musical composition. For an illustration of the prompt formatting options, see the sketch right after these steps.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `prompt` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schemas) ("Generate a music sample"), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds 40 seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
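As an illustration of the `prompt` formatting described in the "Generate a music sample" schema below (a single newline separates lines of lyrics, a blank line adds a pause, and double hash marks `##` around the lyrics add accompaniment), a request payload might look like the sketch below. The lyrics and parameter values are invented examples, not output from the model:

{% code overflow="wrap" %}
```python
payload = {
    "model": "google/lyria2",
    "prompt": (
        "##\n"                          # opening ## turns on accompaniment
        "Morning light on silver water\n"
        "Carry me home\n"
        "\n"                            # blank line: a short pause between phrases
        "Over the hills and far away\n"
        "##"                            # closing ## ends the accompanied lyrics
    ),
    "negative_prompt": "distorted vocals, harsh noise",  # optional: what to avoid
    "seed": 42,                                          # optional: for reproducible output
}
```
{% endcode %}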
## API Schemas ### Generate a music sample This endpoint creates and sends a music generation task to the server — and returns a generation ID and the task status. ## POST /v2/generate/audio > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/generate/audio":{"post":{"operationId":"_v2_generate_audio","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/lyria2"]},"prompt":{"type":"string","description":"Lyrics with optional formatting. You can use a newline to separate each line of lyrics. You can use two newlines to add a pause between lines. You can use double hash marks (##) at the beginning and end of the lyrics to add accompaniment. Maximum 600 characters."},"negative_prompt":{"type":"string","description":"A description of what to exclude from the generated audio"},"seed":{"type":"integer","minimum":0,"description":"A seed for deterministic generation. If provided, the model will attempt to produce the same audio given the same prompt and other parameters."}},"required":["model","prompt"],"title":"google/lyria2"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated audio."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"audio_file":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated music sample from the server After sending a request for music generation, this task is added to the queue. Based on the service's load, the generation can be completed in 30-40 seconds or take a bit more. 
## GET /v2/generate/audio > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/generate/audio":{"get":{"operationId":"_v2_generate_audio","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated audio."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"audio_file":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Quick Code Example Here is an example of generation an audio file based on a prompt using the music model **Lyria 2**.
How it works

As an example, we will generate a song using Google's new model **Lyria 2**. As you can see in the API schemas above, this model accepts a text prompt as input and extracts information about the desired vocals and instruments from it during generation. We generated our prompt in [ChatGPT](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o):

`Majestic orchestral film score recorded in a top-tier London studio. A 100-piece orchestra delivers sweeping, cinematic music with rich emotional depth. The composition features soaring themes, dynamic contrasts, and complex harmonies. Expect powerful percussion, expressive strings, and prominent French horns and timpani. The arrangement emphasizes a dramatic narrative arc with intricate orchestrations and a profound, awe-inspiring atmosphere.`

A notable feature of our audio and video models is that uploading the prompt or sample, generating the content, and retrieving the final file from the server are handled through separate API calls. *(AIML API tokens are only consumed during the first step—i.e., the actual content generation.)*

We’ve written a complete code example that sequentially calls both endpoints — you can view and copy it below. Don’t forget to replace `` with your actual AIML API Key from your [account](https://aimlapi.com/app/keys)!

The structure of the code is simple: there are two separate functions for calling each endpoint, and a main function that orchestrates everything. Execution starts automatically from `main()`. It first runs the function that creates and sends a music generation task to the server — this is where you pass your **prompt** describing the desired musical fragment. This function returns a **generation ID** and the initial **task status**:

{% code overflow="wrap" %}
```javascript
Generation: {'id': 'ac94b938-a53a-483a-bef3-2bea9dd12bb8:lyria2', 'status': 'queued'}
```
{% endcode %}

This indicates that our generation task has been queued on the server (which took 4.5 seconds in our case). Next, `main()` launches the second function — the one that checks the task status and, once the file is ready, retrieves the download URL from the server. This second function is called in a loop every 10 seconds. During execution, you’ll see messages in the output:

* If the file is not yet ready:

```json5
Still waiting... Checking again in 10 seconds.
```

* Once the file is ready, a completion message appears with the download info. In our case, after three polling attempts (about 30–40 seconds in total), we saw the following output:

{% code overflow="wrap" %}
```javascript
Generation complete:\n {'id': 'ac94b938-a53a-483a-bef3-2bea9dd12bb8:lyria2', 'status': 'completed', 'audio_file': {'url': 'https://cdn.aimlapi.com/eagle/files/lion/5N4F_QWb5K8rDSHfpUN0S_output.wav', 'content_type': 'audio/wav', 'file_name': 'output.wav', 'file_size': 6291544}}
```
{% endcode %}

As you can see, the `'status'` is now `'completed'`, and further along in the output we have a URL where the generated audio file can be downloaded.

***

Listen to the track we generated below the code and response blocks.
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import time import requests # Insert your AI/ML API key instead of : aimlapi_key = '' # Creating and sending an audio generation task to the server (returns a generation ID) def generate_audio(): url = "https://api.aimlapi.com/v2/generate/audio" payload = { "model": "google/lyria2", "prompt": ''' Majestic orchestral film score recorded in a top-tier London studio. A full-scale symphony orchestra delivers sweeping, cinematic music with rich emotional depth. The composition features soaring themes, dynamic contrasts, and complex harmonies. Expect powerful percussion, expressive strings, and prominent French horns and timpani. The arrangement emphasizes a dramatic narrative arc with intricate orchestrations and a profound, awe-inspiring atmosphere. ''' } headers = {"Authorization": f"Bearer {aimlapi_key}", "Content-Type": "application/json"} response = requests.post(url, json=payload, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print("Generation:", response_data) return response_data # Requesting the result of the generation task from the server using the generation_id: def retrieve_audio(gen_id): url = "https://api.aimlapi.com/v2/generate/audio" params = { "generation_id": gen_id, } headers = {"Authorization": f"Bearer {aimlapi_key}", "Content-Type": "application/json"} response = requests.get(url, params=params, headers=headers) return response.json() # This is the main function of the program. From here, we sequentially call the audio generation and then repeatedly request the result from the server every 10 seconds: def main(): generation_response = generate_audio() gen_id = generation_response.get("id") if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = retrieve_audio(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["queued", "generating"]: print(f"Status: {status}. Checking again in 10 seconds.") time.sleep(10) else: print("Generation complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AI/ML API key instead of : const API_KEY = ''; async function generateAudio() { const url = 'https://api.aimlapi.com/v2/generate/audio'; const payload = { model: 'google/lyria2', prompt: ` Majestic orchestral film score recorded in a top-tier London studio. A full-scale symphony orchestra delivers sweeping, cinematic music with rich emotional depth. The composition features soaring themes, dynamic contrasts, and complex harmonies. Expect powerful percussion, expressive strings, and prominent French horns and timpani. The arrangement emphasizes a dramatic narrative arc with intricate orchestrations and a profound, awe-inspiring atmosphere. 
` }; const response = await fetch(url, { method: 'POST', headers: { 'Authorization': `Bearer ${API_KEY}`, 'Content-Type': 'application/json' }, body: JSON.stringify(payload) }); if (!response.ok) { console.error(`Error: ${response.status} - ${await response.text()}`); return null; } const data = await response.json(); console.log('Generation:', data); return data; } async function retrieveAudio(generationId) { const url = `https://api.aimlapi.com/v2/generate/audio?generation_id=${generationId}`; const response = await fetch(url, { method: 'GET', headers: { 'Authorization': `Bearer ${API_KEY}`, 'Content-Type': 'application/json' } }); if (!response.ok) { console.error(`Error: ${response.status} - ${await response.text()}`); return null; } return await response.json(); } async function main() { const generationResponse = await generateAudio(); if (!generationResponse || !generationResponse.id) { console.error('No generation ID received.'); return; } const genId = generationResponse.id; const timeout = 600000; // 10 minutes const interval = 10000; // 10 seconds const start = Date.now(); const intervalId = setInterval(async () => { if (Date.now() - start > timeout) { console.log('Timeout reached. Stopping.'); clearInterval(intervalId); return; } const result = await retrieveAudio(genId); if (!result) { console.error('No response from API.'); clearInterval(intervalId); return; } const status = result.status; if (['generating', 'queued'].includes(status)) { console.log(`Status: ${status}. Checking again in 10 seconds.`); } else { console.log('Generation complete:\n', result); clearInterval(intervalId); } }, interval); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation: {'id': 'ac94b938-a53a-483a-bef3-2bea9dd12bb8:lyria2', 'status': 'queued'} Still waiting... Checking again in 10 seconds. Still waiting... Checking again in 10 seconds. Still waiting... Checking again in 10 seconds. Generation complete:\n {'id': 'ac94b938-a53a-483a-bef3-2bea9dd12bb8:lyria2', 'status': 'completed', 'audio_file': {'url': 'https://cdn.aimlapi.com/eagle/files/lion/5N4F_QWb5K8rDSHfpUN0S_output.wav', 'content_type': 'audio/wav', 'file_name': 'output.wav', 'file_size': 6291544}} ``` {% endcode %}
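The completed response also includes a `file_name` alongside the download `url`, so saving the WAV locally takes only a few lines. A minimal sketch using the values from the response above (substitute your own):

{% code overflow="wrap" %}
```python
import requests

# Values taken from the completed response above; substitute the ones from your own run
audio_url = "https://cdn.aimlapi.com/eagle/files/lion/5N4F_QWb5K8rDSHfpUN0S_output.wav"
file_name = "output.wav"

# Download the generated track and write it to disk
audio = requests.get(audio_url)
audio.raise_for_status()
with open(file_name, "wb") as f:
    f.write(audio.content)
```
{% endcode %}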
Listen to the track we generated: {% embed url="" %} `"Majestic orchestral film score recorded in a top-tier London studio. A full-scale symphony orchestra delivers sweeping, cinematic music with rich emotional depth. The composition features soaring themes, dynamic contrasts, and complex harmonies. Expect powerful percussion, expressive strings, and prominent French horns and timpani. The arrangement emphasizes a dramatic narrative arc with intricate orchestrations and a profound, awe-inspiring atmosphere."` {% endembed %} --- # Source: https://docs.aimlapi.com/api-references/text-models-llm/minimax/m1.md # m1

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `minimax/m1`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}
## Model Overview

The world's first open-weight, large-scale hybrid-attention reasoning model.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. A short sketch using a couple of these optional parameters follows right after these steps.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
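As a hedged sketch of step 4, the request below adds a system message plus two optional parameters from the schema (`temperature` and `max_tokens`); everything else matches the basic code example at the bottom of the page. The message texts are just placeholders.

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer ",  # insert your AIML API key after "Bearer "
        "Content-Type": "application/json",
    },
    json={
        "model": "minimax/m1",
        "messages": [
            {"role": "system", "content": "You are a concise technical assistant."},  # placeholder
            {"role": "user", "content": "Explain hybrid attention in two sentences."},  # placeholder
        ],
        "temperature": 0.7,  # optional: 0–1; higher values give more varied output
        "max_tokens": 256,   # optional: cap on the length of the generated reply
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}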
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["minimax/m1"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. 
required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":1,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. 
Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"minimax/m1"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"minimax/m1", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'minimax/m1', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "04a9be008b12ad5eec78791d8aebe36f", "object": "chat.completion", "choices": [ { "index": 0, "finish_reason": "stop", "message": { "role": "assistant", "content": "Hello! How can I assist you today?" } } ], "created": 1750764288, "model": "MiniMax-M1", "usage": { "prompt_tokens": 389, "completion_tokens": 910, "total_tokens": 1299 } } ``` {% endcode %}
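The schema above also documents a streamed response (`text/event-stream`), which you can enable with the optional `stream` parameter. Below is a minimal sketch of consuming such a stream with the OpenAI-compatible Python SDK; treat it as an illustration rather than an official snippet, and note that `<YOUR_AIMLAPI_KEY>` is a placeholder for your key.

```python
from openai import OpenAI

# Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
client = OpenAI(
    api_key="<YOUR_AIMLAPI_KEY>",
    base_url="https://api.aimlapi.com/v1",
)

# With stream=True the API sends chat.completion.chunk events;
# each chunk carries an incremental `delta`, as described in the schema above
stream = client.chat.completions.create(
    model="minimax/m1",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)

for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```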
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/minimax/m2-1.md # m2-1 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `minimax/m2-1` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A text model optimized for code generation and refactoring across multiple languages. Designed for fast, concise developer workflows. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `<YOUR_AIMLAPI_KEY>` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to. :digit\_four: **(Optional) Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior (see the short example below). Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
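As an illustration of step 4, the sketch below adds two of the optional parameters listed in the API schema (`temperature` and `max_tokens`) to the basic request. The prompt text and the `<YOUR_AIMLAPI_KEY>` placeholder are examples only; adjust them to your needs.

```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "minimax/m2-1",
        "messages": [
            {"role": "user", "content": "Refactor this function to avoid repeated code: ..."}
        ],
        # Optional parameters from the API schema below:
        "temperature": 0.2,  # lower values give more focused, deterministic output
        "max_tokens": 512,   # cap on the number of generated tokens
    },
)
print(response.json())
```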
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["minimax/m2-1"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. 
The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. 
Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"minimax/m2-1"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"minimax/m2-1", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'minimax/m2-1', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "059a94ddc4a63fc9565b63027fe82a4e", "choices": [ { "finish_reason": "stop", "index": 0, "message": { "content": "\nThe user has simply said \"Hello\". This is a greeting, so I should respond in a friendly, welcoming manner and offer to help them with whatever they might need. Since I'm called MiniMax-M2.1 and built by MiniMax, I should introduce myself appropriately.\n\n\nHello! 👋\n\nI'm MiniMax-M2.1, an AI assistant built by MiniMax. I'm here to help you with a wide range of tasks including:\n\n- **Answering questions** on various topics\n- **Writing and editing** content (emails, essays, code, etc.)\n- **Problem-solving** and explaining complex concepts\n- **Brainstorming** ideas and creative projects\n- **Analysis** of text, data, or documents\n- **Learning** new things together\n\nWhat can I help you with today?", "role": "assistant", "name": "MiniMax AI", "audio_content": "" } } ], "created": 1766547933, "model": "MiniMax-M2.1", "object": "chat.completion", "usage": { "total_tokens": 206, "total_characters": 0, "prompt_tokens": 39, "completion_tokens": 167, "completion_tokens_details": { "reasoning_tokens": 55 }, "prompt_tokens_details": { "cached_tokens": 39 } }, "input_sensitive": false, "output_sensitive": false, "input_sensitive_type": 0, "output_sensitive_type": 0, "output_sensitive_int": 0, "base_resp": { "status_code": 0, "status_msg": "" }, "meta": { "usage": { "credits_used": 447 } } } ``` {% endcode %}
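In most applications you only need the assistant text and the token usage from this response. A minimal sketch of extracting them from the parsed JSON, assuming `data = response.json()` as in the code example above:

```python
# `data` is the parsed JSON response shown above
reply = data["choices"][0]["message"]["content"]
print(reply)

# Token accounting, e.g. for cost monitoring
usage = data["usage"]
print(usage["prompt_tokens"], usage["completion_tokens"], usage["total_tokens"])
```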
--- # Source: https://docs.aimlapi.com/api-references/embedding-models/together-ai/m2-bert-80m-retrieval.md # m2-bert-80M-retrieval {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `togethercomputer/m2-bert-80M-32k-retrieval` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview The model integrates advanced machine learning techniques to excel in searching and retrieving relevant information from vast datasets. With its 80M-parameter design and 32k-token context window, it balances performance and efficiency, making it suitable for applications requiring high-speed data access and analysis. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/embeddings > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Embedding.v1.CreateEmbeddingsResponseDTO":{"type":"object","properties":{"object":{"type":"string","enum":["object"]},"data":{"type":"array","items":{"type":"object","properties":{"object":{"type":"string","enum":["embedding"]},"index":{"type":"number"},"embedding":{"type":"array","items":{"type":"number"}}},"required":["object","index","embedding"]}},"model":{"type":"string"},"usage":{"type":"object","properties":{"total_tokens":{"type":"number","nullable":true}}}},"required":["object","data","model","usage"]}}},"paths":{"/v1/embeddings":{"post":{"operationId":"EmbeddingsController_createEmbeddings_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["togethercomputer/m2-bert-80M-32k-retrieval"]},"input":{"anyOf":[{"type":"string","minLength":1},{"type":"array","items":{"type":"string"},"minItems":1}],"description":"Input text to embed, encoded as a string or array of tokens."}},"required":["model","input"]}}}},"responses":{"200":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Embedding.v1.CreateEmbeddingsResponseDTO"}}}}},"tags":["Embeddings"]}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %}
```python
import openai

# Initialize the API client
client = openai.OpenAI(
    # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
    api_key="<YOUR_AIMLAPI_KEY>",
    base_url="https://api.aimlapi.com/v1",
)

# Define the text for which to generate an embedding
text = "Laura is a DJ."

# Request the embedding
response = client.embeddings.create(
    input=text,
    model="togethercomputer/m2-bert-80M-32k-retrieval"
)

# Print the embedding
print(response)
```
{% endtab %} {% tab title="JS" %} ```javascript import OpenAI from "openai"; import util from "util"; // Initialize the API client const client = new OpenAI({ // Insert your AIML API Key instead of apiKey: "", baseURL: "https://api.aimlapi.com/v1", }); // Define the text for which to generate an embedding const text = "Laura is a DJ."; const response = await client.embeddings.create({ input: text, model: "togethercomputer/m2-bert-80M-32k-retrieval", }); // Convert embedding to a regular array (not TypedArray) const pythonLikeResponse = { ...response, data: response.data.map(item => ({ ...item, embedding: Array.from(item.embedding), })), }; // Python-like print console.log( util.inspect(pythonLikeResponse, { depth: null, maxArrayLength: null, compact: true, }) ); ``` {% endtab %} {% endtabs %} This example shows how to set up an API client, send text to the embedding API, and print the response with the embedding vector. See how large a vector response the model generates from just a single short input phrase.
Response {% code overflow="wrap" %} ```json CreateEmbeddingResponse(data=[Embedding(embedding=[-0.022518871, -0.05492567, 0.04314954, 0.01908204, 0.032154366, 0.018771874, 0.13725114, -0.014640589, 0.0061788796, 0.017178858, -0.030379634, -0.041357648, -0.03360474, 0.042473987, 0.053178973, -0.005236369, -0.03219526, 0.041428592, 0.06592321, 0.0067112693, -0.023770168, -0.019299848, 0.007993773, -0.05168535, -0.015209434, 0.0017797928, -0.07260841, 0.116862014, -0.08432825, 0.041688, 0.001939922, 0.087542035, 0.00027340185, 0.054711558, -0.024200993, -0.031384196, 0.074323244, 0.07123001, 0.023314048, 0.044129774, 0.046673443, -0.017252263, 0.05124588, 0.011533485, 0.11889566, 0.06285429, 0.008315098, 0.025877435, 0.059764042, 0.019463, 0.04239476, -0.03612913, -0.013776412, 0.013890059, 0.103183284, -0.13443822, 0.04495936, -0.009156373, -0.006672909, 0.078052625, -0.09794928, -0.021736462, 0.028952237, -0.041858006, 0.036441904, -0.018249668, 0.045701273, 0.064397104, 0.047778558, -0.04607775, 0.0613175, 0.017838702, 0.059114438, -0.017155133, -0.08712801, -0.15317874, 0.059551563, -0.0060825604, -0.06012979, 7.0648035e-05, -0.0051896097, -0.06964658, -0.007011863, 0.027934607, -0.034792814, -0.026817398, -0.00048898155, -0.062317953, -0.066565625, -0.075289406, -0.0046270085, 0.04413178, -0.063599475, -0.02996869, 0.06441472, -0.06308936, 0.03392907, 0.011777486, -0.04050071, -0.10732842, -0.01925484, -0.020376364, 0.07858302, -0.06894146, 0.039393075, -0.04308492, 0.024917038, -0.0914849, 0.10169439, -0.028998805, 0.046734434, -0.059912268, -0.029643107, 0.011367351, -0.12327092, 0.0052152043, -0.028607994, -0.011932978, -0.043691877, 0.0194266, 0.030180357, 0.12889579, 0.0754103, -0.03970011, 0.07680547, 0.051464014, 0.01168237, -0.05554915, -0.040402085, -0.111259386, -0.07429493, 0.05731342, -0.020595822, 0.043496504, 0.044175837, 0.020880852, 0.012339997, -0.00298804, 0.05948228, -0.0004091635, 0.021892816, 0.009636341, -0.010041086, 0.079659864, -0.039106905, 0.025148746, 0.025120387, 0.08840679, -0.017683662, -0.005038673, 0.05305955, 0.017893763, 0.050628677, 0.070915446, -0.030047618, 0.03086825, -0.002983781, -0.031904973, -0.08130232, 0.00899561, 0.0032726931, -0.040147297, -0.08352723, -0.1173947, 0.15183096, 0.0061662495, 0.03798979, -0.0077478862, -0.028694391, 0.03062653, 0.06121598, -0.060576487, -0.063433364, 0.007295667, -0.073930494, 0.046088267, 0.017132975, 0.04507851, 0.03084464, 0.053483658, 0.034529164, -0.07918834, 0.055112015, 0.01247878, 0.069280356, 0.020827884, 0.042166162, -0.020410769, 0.026415937, -0.10586825, 0.03401086, -0.11206052, 0.022029875, -0.08723038, 0.03311326, -0.013136459, -0.0043826587, 0.08547261, -0.007686176, 0.01795148, -0.033622082, -0.0038820533, -0.07831721, 0.0046814843, 0.07146545, 0.016800385, -0.030045308, 0.025980774, 0.0651566, -0.0034665945, -0.050702367, 0.03332023, -0.04632355, -0.00829293, -0.012228441, 0.044963043, -0.01864288, -0.021899091, 0.047760963, 0.054829728, 0.07316751, 0.029128728, -0.038002785, -0.021813853, -0.012928496, -0.032905307, 0.049185347, 0.048306346, -0.059512895, -0.008473875, -0.10521597, -0.09854483, -0.053533103, 0.0081416285, 0.048627093, 0.086243056, 0.04579295, 0.00020610541, 0.057024997, 0.0045687314, -0.017045287, 0.06937633, -0.018246774, 0.0030933763, -0.047455445, 0.100754544, 0.009138106, -0.06142859, -0.011727202, 0.05943007, 0.019529304, -0.08178414, 0.04293396, 0.008947786, 0.062455736, -0.0044998615, 0.0067285607, -0.0592153, 0.015219912, 0.017413614, -0.015478038, -0.015715232, 
0.015510913, 0.08534412, -0.005082124, 0.01736335, -0.021218432, 0.11955415, -0.033334773, 0.09443946, 0.068228334, -0.015644321, -0.005873024, 0.049607933, 0.01715967, 0.03214081, -0.032750327, 0.091966696, 0.032227907, -0.034452155, -0.025407549, -0.040212154, 0.07745749, 0.0054787705, 0.060588814, 0.00085411756, -0.09548096, -0.028679665, -0.016162518, 0.016654978, -0.0083382875, -0.027987378, -0.043397427, 0.09862202, -0.012673832, 0.022648692, -0.026811635, -0.05019208, -0.024081457, -0.039421905, -0.003121619, -0.027325125, -0.023387887, 0.011899437, -0.0043352665, -0.040094543, 0.043613266, 0.054189432, -0.029557189, 0.036127467, -0.016424673, 0.0048351507, -0.03273681, -0.01763599, 0.006455148, 0.027774762, 0.0965571, -0.013099461, 0.10815064, 0.0644431, 0.032897606, -0.02136664, -0.0030217145, 0.054244217, -0.011225383, -0.045582764, 0.011226498, 0.092165194, 0.020423887, -0.011025544, -0.0896547, -0.008121796, 0.025983114, 0.03126251, 0.010954731, -0.078501426, 0.08812612, -0.17783329, 0.079572335, 0.0366609, 0.024194011, -0.058190376, -0.03911377, 0.029253501, -0.006428917, -0.061678503, -0.058610704, -0.024597418, 0.10173387, -0.010054734, 0.042108692, -0.0037493475, 0.038082212, 0.05548489, 0.04893639, 0.00017219223, -0.026868304, 0.059743445, -0.044876374, 0.022350973, -0.0010833307, 0.005143426, -0.032703284, -0.018576814, -0.015566642, -0.032420512, 0.06200163, 0.010864542, -0.020961571, -0.06874651, 0.00077768305, 0.0062362673, 0.029155679, -0.091916256, -0.0015506713, -0.014262802, 0.012104339, -0.06674989, 0.0778559, 0.103922434, 0.059907734, -0.007881259, -0.07194788, -0.06436307, 0.026487429, 0.07824377, 0.084921435, 0.06936699, -0.019934516, -0.02544156, 0.03439074, 0.033536144, -0.071031325, 0.09915472, 0.04934314, 0.01840032, 0.041281283, 0.019535236, -0.03159787, 0.0028999106, 0.045704037, -0.01994819, 0.05776877, 0.054494046, -0.021583393, -0.006835557, -0.016128188, 0.089669, -0.018075528, -0.031506024, 0.10640809, 0.05311536, 0.07978824, -0.028725374, -0.008919856, -0.08879467, 0.08501562, -0.07716739, 0.024608573, 0.088180654, 0.057911232, 0.17356439, -0.062507294, 0.045779735, -0.060949843, 0.0043140682, 0.0014630527, -0.039740812, 0.059488088, -0.10273114, 0.08456389, 0.017332463, 0.09486501, -0.06817094, 0.10525956, -0.018798547, -0.09653561, -0.0130648725, 0.034245886, -0.07156934, -0.03941032, 0.02632053, -0.009588361, -0.04658321, -0.047364846, -0.033951584, -0.04864003, 0.11221384, -0.019882238, -0.08250822, -0.07023993, 0.08075739, -0.04651175, 0.1277658, -0.059738882, 0.008536227, 0.011508066, 0.01578062, -0.0028480047, 0.1062885, -0.06019234, -0.047270287, -0.09146898, 0.074352525, -0.06831037, 0.049275126, -0.05490411, -0.13465248, -0.14108478, -0.038410764, 0.038202513, -0.01447899, 0.124756694, 0.04916432, 0.052207407, -0.05710036, -0.006149548, -0.006929, 0.034152646, 0.057394102, -0.05677231, 0.048502147, 0.004141689, -0.042022724, 0.046504058, 0.06590855, -0.05777849, 0.019975701, -0.024726968, -0.08754647, 0.018082233, 0.061266966, 0.072870076, -0.0778968, 0.074223295, 0.006374571, 0.130426, 0.0226941, 0.0043493654, -0.04955937, 0.0036248127, 0.099575765, -0.088754416, -0.010983745, -0.009471881, -0.036653765, -0.0009121259, 0.10609331, -0.032778006, 0.086380266, -0.04930271, -0.02300316, -0.04129246, -0.008538411, 0.014015781, -0.042881224, -0.047556598, -0.03297384, 0.029435204, -0.0384602, -0.06857866, -0.091861345, 0.07803545, 0.02102974, -0.11948425, 0.024376437, 0.007228921, -0.058134653, -0.0013609343, 0.023084294, -0.057189673, 
-0.05728298, -0.0484506, -0.0821063, 0.019594098, -0.006331754, -0.059106775, -0.007418993, -0.03072285, 0.060713455, 0.018790137, -0.006208014, 0.079967655, -0.015576144, 0.028650763, -0.002998559, 0.065844886, -0.010050332, 0.0339818, -0.08275663, -0.0031037864, 0.05263937, 0.04846317, 0.05583557, -0.02816636, -0.03598453, 0.019715307, 0.0245313, 0.0009092835, 0.025249347, 0.10045381, -0.045542967, 0.06498728, 0.014768274, -0.04126555, 0.016090501, 0.002490195, 0.09871467, 0.036914844, 0.04130855, -0.03172271, -0.038417645, 0.041601047, -0.051246494, -0.051879317, -0.03654171, -0.0330898, 0.02112689, -0.09168839, 0.035563994, -0.010605103, -0.100304686, 0.10010761, 0.0037157943, 0.07160889, -0.1396608, 0.03641288, -0.017214408, -0.100317664, 0.0062580574, -0.06306692, 0.024881214, -0.04646615, 0.04418978, -0.057057884, -0.081657864, 0.057855934, 0.06479904, 0.09202495, -0.022400219, -0.010139982, -0.03231404, 0.07246991, 0.01929464, -0.07119892, 0.0010536053, 0.049358428, 0.0077756452, 0.036952395, -0.04327241, 0.042407084, 0.03172406, -0.025600178, -0.0673364, 0.0010437515, -0.107536286, -0.04833168, -0.029389678, -0.004706864, 0.031769544, -0.09903898, 0.15186864, 0.07637274, -0.0020486386, 0.046456393, -0.07706689, 0.05339683, 0.03166027, -0.058898803, -0.11018255, 0.014999272, 0.11599336, -0.030609714, -0.022608014, -0.0028181057, 0.008997316, 0.020821538, -0.008648947, 0.030247014, -0.05174877, 0.0006816386, 0.038979083, -0.021107972, -0.05501357, -0.03942976, -0.033877965, -0.049762525, -0.024049241, -0.0596664, 0.025880108, 0.0041145524, -0.050766345, -0.08031096, -0.004088612, -0.053904913, 0.01243612, 0.009684816, 0.05124111, -0.026534285, 0.011635154, 0.05288512, 0.06260319, -0.018155998, -0.027150845, 0.05731768, -0.04091087, 0.080586776, 0.023885174, -0.011623526, 0.06870043, -0.021252401, -0.022097027, -0.07718119, 0.05513044, -0.047646742, -0.012553494, -0.025166888, 0.03182203, -0.021530312, -0.081266604, -0.04483667, -0.0067918906, 0.06554406, 0.0020491984, 0.05770826, 0.013739085, -0.028647613, -0.049964804, -0.026096478, 0.066631645, -0.040899485, -0.009702203, -0.07454616, 0.010306494, -0.029889302, -0.040272836, 0.051952727, -0.01145505, 0.03268713, -0.0093141785, -0.0599903, -0.043513667, 0.056817, 0.04209819, -0.05415263, 0.046982195, 0.050723214, -0.046014734, 0.023149105, 0.029349804, 0.0019439462, 0.011594587, 0.010603982, 0.035190985, 0.041638557, 0.0272645, -0.02095035, 0.067932725, 0.067013115, -0.1120199, 0.028714051, -0.06291958, 0.011970274, 0.042574234, 0.030516073, -0.01871271, -0.064694345, -0.003966169, -0.082643494, 0.06852858, -0.08306689, -0.04162032, -0.0024824948, 0.010162806, 0.016500138, -0.035984974, -0.026145082, -0.060093585, 0.00283806, -0.11145292, 0.018978847, -0.030565565, -0.028125482, -0.0050368947, 0.018238999, -0.027782504, -0.072863534, 0.017784791, -0.03187564, 0.041816197, -0.034067683, 0.020410407, 0.04560171, -0.009920052, -0.012512977, 0.04930977, -0.017679198, 0.049749006, 0.034082506, -0.07823904, 0.035949912, 0.06900245, 0.0141290035, 0.07220982, -0.060865786, 0.00061688427, -0.04459162, 0.045151222], index=0, object='embedding')], model='togethercomputer/m2-bert-80M-32k-retrieval', object='list', usage=None, meta={'usage': {'credits_used': 1}}) ``` {% endcode %}
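For retrieval tasks, embedding vectors are usually compared with cosine similarity. Below is a minimal sketch under the assumption that you request embeddings for two input strings in a single call (the `input` parameter in the schema above also accepts an array of strings); the texts and the `<YOUR_AIMLAPI_KEY>` placeholder are examples only.

```python
import numpy as np
import openai

# Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
client = openai.OpenAI(
    api_key="<YOUR_AIMLAPI_KEY>",
    base_url="https://api.aimlapi.com/v1",
)

# Embed two texts in one request
response = client.embeddings.create(
    input=["Laura is a DJ.", "Laura mixes music at a club."],
    model="togethercomputer/m2-bert-80M-32k-retrieval",
)

a = np.array(response.data[0].embedding)
b = np.array(response.data[1].embedding)

# Cosine similarity: values closer to 1.0 indicate more similar meaning
similarity = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(similarity)
```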
You can find a more advanced example of using embedding vectors in our article [Find Relevant Answers: Semantic Search with Text Embeddings](https://docs.aimlapi.com/use-cases/find-relevant-answers-semantic-search-with-text-embeddings) in the Use Cases section. --- # Source: https://docs.aimlapi.com/api-references/text-models-llm/minimax/m2.md # m2

{% hint style="info" %}
This documentation is valid for the following list of our models:

* `minimax/m2`
{% endhint %}

Try in Playground
## Model Overview

A high-performance language model optimized for coding and autonomous agent workflows.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field; this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (we’ve already filled them in for you in the example), but you can include optional parameters to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. A short sketch of a request with a few optional parameters is also shown right after these steps.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
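For step 4, here is a minimal sketch of the request with a few optional parameters added. The endpoint, model ID, and parameter ranges come from the API schema on this page; the specific values for `temperature`, `top_p`, and `max_tokens`, as well as the prompt, are illustrative only, and `YOUR_AIMLAPI_KEY` is a placeholder for your actual key.

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Placeholder: replace YOUR_AIMLAPI_KEY with your actual AI/ML API key
        "Authorization": "Bearer YOUR_AIMLAPI_KEY",
        "Content-Type": "application/json",
    },
    json={
        "model": "minimax/m2",
        "messages": [
            {"role": "user", "content": "Write a Python function that reverses a string."}
        ],
        # Optional parameters from the API schema below (illustrative values):
        "temperature": 0.3,  # 0..1 for this model; lower values are more deterministic
        "top_p": 0.9,        # nucleus sampling; adjust this or temperature, not both
        "max_tokens": 512,   # upper bound on generated tokens
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}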
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["minimax/m2"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. 
required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":1,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. 
Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"minimax/m2"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"minimax/m2", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'minimax/m2', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
**Response**:

{% code overflow="wrap" %}
```json5
{
  "id": "0557b8f7fa197172a75531a82ae6c887",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "message": {
        "role": "assistant",
        "content": "\nThe user says \"Hello\". This is a simple greeting. There's no request. According to policy, we respond politely, perhaps ask how we can help. So answer \"Hello! How can I assist you today?\" Should keep tone friendly.\n\nThus final answer.\n\n\nHello! How can I help you today?"
      }
    }
  ],
  "created": 1762166263,
  "model": "MiniMax-M2",
  "usage": {
    "prompt_tokens": 26,
    "completion_tokens": 159,
    "total_tokens": 185
  }
}
```
{% endcode %}
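The example above returns the whole completion as a single JSON body. The schema on this page also documents a `stream` parameter that switches the endpoint to server-sent events, where each `chat.completion.chunk` carries a piece of the answer in `choices[].delta.content`. Below is a hedged sketch of consuming that stream with `requests`; the `data: ` prefix and the `[DONE]` sentinel follow the usual OpenAI-compatible SSE convention and are an assumption here, since the schema does not spell out the framing. `YOUR_AIMLAPI_KEY` is a placeholder.

{% code overflow="wrap" %}
```python
import json
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Placeholder: replace YOUR_AIMLAPI_KEY with your actual AI/ML API key
        "Authorization": "Bearer YOUR_AIMLAPI_KEY",
        "Content-Type": "application/json",
    },
    json={
        "model": "minimax/m2",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,  # request server-sent events instead of a single JSON body
    },
    stream=True,
)
response.raise_for_status()

# Assumed SSE framing: each event line starts with "data: " and the stream
# ends with "data: [DONE]". Each chunk matches the chat.completion.chunk
# shape from the schema above.
for raw_line in response.iter_lines():
    if not raw_line:
        continue
    line = raw_line.decode("utf-8")
    if not line.startswith("data: "):
        continue
    payload = line[len("data: "):]
    if payload.strip() == "[DONE]":
        break
    chunk = json.loads(payload)
    delta = chunk["choices"][0].get("delta") or {}
    print(delta.get("content") or "", end="", flush=True)
print()
```
{% endcode %}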
--- # Source: https://docs.aimlapi.com/api-references/video-models/magic.md # Magic - [magic/text-to-video](/api-references/video-models/magic/text-to-video.md) - [magic/image-to-video](/api-references/video-models/magic/image-to-video.md) - [magic/video-to-video](/api-references/video-models/magic/video-to-video.md) --- # Source: https://docs.aimlapi.com/api-references/text-models-llm/anthracite/magnum-v4.md # magnum-v4 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `anthracite-org/magnum-v4-72b` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview An LLM fine-tuned on top of Qwen2.5, specifically designed to replicate the prose quality of the Claude 3 models, particularly Sonnet and [Opus](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-3-opus). It excels in generating coherent and contextually rich text. {% hint style="warning" %} [Create AI/ML API Key](https://aimlapi.com/app/keys) {% endhint %} [How to make the first API call](https://docs.aimlapi.com/quickstart/setting-up)
How to make the first API call

**1️⃣ Required setup (don’t skip this)**\
▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\
▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI.

**2️⃣ Copy the code example**\
At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project. A minimal request sketch is also shown right after these steps.

**3️⃣ Update the snippet for your use case**\
▪ **Insert your API key:** replace `` with your real AI/ML API key.\
▪ **Select a model:** set the `model` field to the model you want to call.\
▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models).

**4️⃣ (Optional) Tune the request**\
Depending on the model type, you can add optional parameters to control the output (generation settings, quality, length, and so on). See the API schema below for the full list.

**5️⃣ Run your code**\
Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
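As a quick orientation, here is a minimal sketch of a chat completion request for this model, following the same endpoint and payload shape used on the other chat model pages in this documentation. The prompt text is illustrative, and `YOUR_AIMLAPI_KEY` is a placeholder for your actual key.

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Placeholder: replace YOUR_AIMLAPI_KEY with your actual AI/ML API key
        "Authorization": "Bearer YOUR_AIMLAPI_KEY",
        "Content-Type": "application/json",
    },
    json={
        "model": "anthracite-org/magnum-v4-72b",
        "messages": [
            {
                "role": "user",
                # Illustrative prompt; replace with your own request
                "content": "Write a short, atmospheric opening paragraph for a mystery story.",
            }
        ],
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}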
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["anthracite-org/magnum-v4-72b"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. 
This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"top_a":{"type":"number","minimum":0,"maximum":1,"description":"Alternate top sampling parameter."}},"required":["model","messages"],"title":"anthracite-org/magnum-v4-72b"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"anthracite-org/magnum-v4-72b", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { try { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of YOUR_AIMLAPI_KEY 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'anthracite-org/magnum-v4-72b', messages:[ { role:'user', // Insert your question for the model here, instead of Hello: content: 'Hello' } ] }), }); if (!response.ok) { throw new Error(`HTTP error! Status ${response.status}`); } const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } catch (error) { console.error('Error', error); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
**Response**: {% code overflow="wrap" %}
```json5
{
  "id": "gen-1744217980-rdVBcVTb76dllKCCRjak",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I assist you today?",
        "refusal": null
      }
    }
  ],
  "created": 1744217980,
  "model": "anthracite-org/magnum-v4-72b",
  "usage": {
    "prompt_tokens": 37,
    "completion_tokens": 50,
    "total_tokens": 87
  }
}
```
{% endcode %}
--- # Source: https://docs.aimlapi.com/integrations/make.md # Make ## About Make is a powerful, enterprise-scale automation platform. It offers flow control, data manipulation, HTTP/webhooks, AI agents and tools, notes, an MCP server, and many other features. ## How to Use AIML API via Make You work with Make through the browser; there’s no need to install any components separately. 1. First, you need a [Make account](https://www.make.com/en/register). 2. Next, click on **Scenarios** in the left menu.\ \ ![](https://3927338786-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FROMd1X5PuqtikJ48n2N9%2Fuploads%2Fgit-blob-65b7badb7d09b2cd200ef52e199e242fffc05e62%2FUntitled.png?alt=media) 3. Choose the **AI/ML API** option:\ \\
4. Choose **Get a Model Response**:\\
5. Click on **Create a Connection**, provide a name, and paste your [AIMLAPI key](https://aimlapi.com/app/keys):\\
6. Once that’s done, you’ll be able to configure the model and enter your prompt in the provided field:\\
7. Then just click **Run once**:\\
\ When the model returns a response, you’ll see it to the right of your node:\\
You can find more details in the [official documentation](https://help.make.com/create-your-first-scenario). ## Our Supported Models Any of our [text models](https://docs.aimlapi.com/api-references/text-models-llm#complete-text-model-list) can be used to process your requests — for instance, [gpt-4o](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o). --- # Source: https://docs.aimlapi.com/integrations/manus.md # Manus [Manus](https://manus.im/docs/introduction/welcome) is a workflow and AI-agent orchestration platform that lets users integrate custom APIs, define automation logic, and run LLM-powered tools inside a unified interface. Manus supports custom model backends (such as AI/ML API), prompt templates, request routing, secure secret storage, and visual debugging. ## Installation / Setup ### 1. Prerequisites * **AI/ML API account** – sign up and create an API key at: * Dashboard: https://aimlapi.com/app/ * API Keys: https://aimlapi.com/app/keys/ * **Manus account** – with access to **Settings → Integrations → Add custom API**.
*** ### 2. Open the *Add custom API* Form In Manus, go to: > **Settings → Integrations → Add custom API** This opens the configuration form.
*** ### 3. Fill Out the Configuration #### A) Name Enter: ``` AI/ML API ```
*** #### B) Icon (optional) Paste this URL into the icon field: {% code overflow="wrap" %} ``` https://raw.githubusercontent.com/OctavianTheI/aimlapi-assets-devrel/main/aimlapi%20square%20Logo%20Icon.svg ``` {% endcode %}
*** #### C) Base URL & Auth Header * **Base URL:** ``` https://api.aimlapi.com ``` * **Authorization header (Manus will use the secret defined below):** ``` Authorization: Bearer ${AIMLAPI_KEY} ```
*** #### D) Secrets Create a secret to store your AI/ML API key: * **Secret name:** `AIMLAPI_KEY` * **Value:** your personal AI/ML API key from the AI/ML API dashboard.
*** #### E) Note (Request Templates) Paste the following template into the **Note** field — Manus will use it as a reference for how to call AI/ML API endpoints: {% code overflow="wrap" %} ```json This custom API connects to https://api.aimlapi.com to access a variety of AI models. The API key is stored in the AIMLAPI_KEY secret and must be sent as: Authorization: Bearer ${AIMLAPI_KEY} Base URL: https://api.aimlapi.com ## 1. Chat / Text & Code POST /chat/completions { "model": "MODEL_NAME_HERE", "messages": [{"role":"user","content":"..."}] } ## 2. Text → Image POST /images/generations { "model": "MODEL_NAME_HERE", "prompt": "Describe the image" } ## 3. Audio → Text (STT) POST /audio/transcriptions [file upload: "file"] { "model": "openai/whisper-1" } ## 4. Text → Speech (TTS) POST /audio/speech { "model": "MODEL_NAME_HERE", "input": "Text to speak", "voice": "alloy" } ## 5. Image → Text (Vision) POST /chat/completions { "model": "MODEL_NAME_HERE", "messages":[{"role":"user","content":[ {"type":"text","text":"Question"}, {"type":"image_url","image_url":{"url":"https://..."}}]}] } ## 6. Embeddings POST /embeddings { "model": "MODEL_NAME_HERE", "input": "The text to embed" } ``` {% endcode %}
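If you want to sanity-check your key and the request shape outside Manus before saving the integration, template #1 (Chat / Text & Code) corresponds roughly to the direct HTTP call below. This is a minimal sketch: the `gpt-4o` model ID and the `/v1` path prefix are illustrative choices consistent with the rest of these docs, not something Manus requires.

{% code overflow="wrap" %}
```python
import os
import requests

# Minimal sketch of template #1 as a direct HTTP call, useful for verifying
# your AIMLAPI_KEY before wiring it into Manus. The model ID is an example;
# any chat model from the AI/ML API catalog should work.
API_KEY = os.environ["AIMLAPI_KEY"]

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4o",
        "messages": [{"role": "user", "content": "Write a poem about spring."}],
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}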
*** ### 4. Finalise the Integration Click **Add**. Manus will save the integration and show a confirmation message.
*** ## Usage Examples Once the custom API is added, you can select **AI/ML API** as a backend in Manus and use prompts such as: * **Chat / Text & Code** > “Write a poem about spring.” * **Text → Image** > “Draw a lonely lighthouse on a stormy coast using `openai/dall-e-3`.” * **Embeddings** > “Generate embeddings for this paragraph with `BAAI/bge-large-en-v1.5`.”
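For reference, the embeddings example above maps to template #6 from the **Note** field. Below is a minimal sketch of the underlying request; the field names follow that template, and the exact response structure may differ slightly between embedding models.

{% code overflow="wrap" %}
```python
import os
import requests

# Minimal sketch of the embeddings template (#6) as a direct HTTP call.
API_KEY = os.environ["AIMLAPI_KEY"]

response = requests.post(
    "https://api.aimlapi.com/v1/embeddings",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "BAAI/bge-large-en-v1.5",
        "input": "The text to embed",
    },
)
response.raise_for_status()
print(response.json())
```
{% endcode %}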
*** ## Tips * You can use either `https://api.aimlapi.com` or `https://api.aimlapi.com/v1` as the base URL. * All requests must include: ```python Authorization: Bearer ${AIMLAPI_KEY} ``` * AI/ML API supports a wide range of providers and models (chat, code, images, audio, vision, embeddings) with enterprise-grade rate limits and uptime. *** ## Helpful Links * Dashboard: [https://aimlapi.com/app/](https://aimlapi.com/app/?utm_source=manus\&utm_medium=github\&utm_campaign=integration) * API Keys: [https://aimlapi.com/app/keys/](https://aimlapi.com/app/keys/?utm_source=manus\&utm_medium=github\&utm_campaign=integration) * Models browser: [https://aimlapi.com/models/](https://aimlapi.com/models/?utm_source=manus\&utm_medium=github\&utm_campaign=integration) * Docs: [https://docs.aimlapi.com/](https://docs.aimlapi.com/?utm_source=manus\&utm_medium=github\&utm_campaign=integration) Enjoy building with Manus + AI/ML API 🚀 --- # Source: https://docs.aimlapi.com/integrations/marvin.md # Marvin ## About [Marvin](https://github.com/PrefectHQ/marvin) is a Python framework by PrefectHQ for building agentic AI workflows and producing structured outputs. It allows developers to define *Tasks* (objective-focused units of work) and assign them to specialized *Agents* (LLM-powered configurations). Marvin supports type-safe results via Pydantic models, integrates with multiple LLM providers through Pydantic AI, and enables orchestration of multi-agent threads for complex workflows. ## Installation *** ### 1) Install Marvin ```bash uv add marvin # or pip install marvin ``` *** ### 2) Set your environment variable macOS / Linux: ```bash export AIML_API_KEY=your-api-key ``` Windows PowerShell: ```powershell setx AIML_API_KEY "your-api-key" ``` *** ### 3) Example — Run an AI/ML API Agent **File:** [`examples/provider_specific/aimlapi/run_agent.py`](https://github.com/PrefectHQ/marvin/blob/main/examples/provider_specific/aimlapi/run_agent.py) {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python from __future__ import annotations import os from pathlib import Path from pydantic_ai.models.openai import OpenAIModel from pydantic_ai.providers.openai import OpenAIProvider import marvin AIML_API_URL = "https://api.aimlapi.com/v1" def get_provider() -> OpenAIProvider: api_key = os.getenv("AIML_API_KEY") if not api_key: raise RuntimeError("Set AIML_API_KEY environment variable to your AI/ML API key.") return OpenAIProvider(api_key=api_key, base_url=AIML_API_URL) def write_file(path: str, content: str) -> None: """Write content to a file.""" Path(path).write_text(content) def main() -> None: writer = marvin.Agent( model=OpenAIModel("gpt-4o", provider=get_provider()), name="AI/ML Writer", instructions="Write concise, engaging content for developers", tools=[write_file], ) result = marvin.run( "how to use pydantic? 
write haiku to docs.md", agents=[writer], ) print(result) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% endtabs %} Run it: ```bash AIML_API_KEY=your-api-key \ uv run examples/provider_specific/aimlapi/run_agent.py ``` *** ### 4) Other Examples More examples are available in the same directory: > [github.com/PrefectHQ/marvin/tree/main/examples/provider\_specific/aimlapi](https://github.com/PrefectHQ/marvin/tree/main/examples/provider_specific/aimlapi) * `structured_output.py` — structured JSON output * `tools_agent.py` — agent with custom tools (dates, weather) *** ## Tips * **Profiles:** use multiple configurations (default = [`openai/gpt-5-chat-latest`](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-chat), budget = [`openai/o4-mini`](https://docs.aimlapi.com/api-references/text-models-llm/openai/o4-mini)) * **Structured results:** pass `result_type=...` for typed outputs * **Tools:** register Python functions via `Agent(tools=[...])` * **Token limits:** increase output size if needed *** ## Troubleshooting | Issue | Solution | | ----------------------- | ----------------------------------------------- | | **401** Unauthorized | Check your API key and remove extra spaces | | **404** Model not found | Verify the model ID exists in your account | | Network error | Whitelist `api.aimlapi.com` if behind VPN/proxy | *** ## Helpful Links * Dashboard: * API Keys: * Models: * Docs: * Marvin repository: Enjoy coding with **Marvin + AI/ML API** 🚀 --- # Source: https://docs.aimlapi.com/api-references/text-models-llm/meta/meta-llama-3-8b-instruct-lite.md # Llama-3-8B-Instruct-Lite

This documentation is valid for the following list of our models:

  • meta-llama/Meta-Llama-3-8B-Instruct-Lite
## Model Overview A generative text model optimized for dialogue and instruction-following use cases. It leverages a refined transformer architecture to deliver high performance in text generation tasks. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Insert your actual AI/ML API key from your account after `Bearer ` in the `Authorization` header.\ :black\_small\_square: Insert your question or request into the `content` field; this is what the model will respond to. :digit\_four: **(Optional) Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them; a short sketch right after this section shows a couple of them in use. :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
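For example, here is a short sketch of step 4, building on the Python code example at the bottom of this page: the same request with a couple of optional parameters from the API schema added. The parameter values are illustrative only.

{% code overflow="wrap" %}
```python
import requests

# The basic request from the code example below, with optional parameters added.
# The values for max_tokens and temperature are illustrative.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key after "Bearer ":
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "meta-llama/Meta-Llama-3-8B-Instruct-Lite",
        "messages": [{"role": "user", "content": "Hello"}],
        "max_tokens": 256,    # cap the length of the generated completion
        "temperature": 0.7,   # higher values give more random output
    },
)
print(response.json())
```
{% endcode %}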
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["meta-llama/Meta-Llama-3-8B-Instruct-Lite"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. 
Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"echo":{"type":"boolean","description":"If True, the response will contain the prompt. Can be used with logprobs to return prompt logprobs."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. 
Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."}},"required":["model","messages"],"title":"meta-llama/Meta-Llama-3-8B-Instruct-Lite"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"meta-llama/Meta-Llama-3-8B-Instruct-Lite", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'meta-llama/Meta-Llama-3-8B-Instruct-Lite', messages:[ { role:'user', // Insert your question for the model here, instead of Hello: content: 'Hello' } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endtab %} {% endtabs %}
**Response**: {% code overflow="wrap" %}
```json5
{
  "id": "o95Ai5e-2j9zxn-976ad7df3ef49b19",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hello! It's nice to meet you. Is there something I can help you with, or would you like to chat?",
        "tool_calls": []
      }
    }
  ],
  "created": 1756457871,
  "model": "meta-llama/Meta-Llama-3-8B-Instruct-Lite",
  "usage": {
    "prompt_tokens": 2,
    "completion_tokens": 5,
    "total_tokens": 7
  }
}
```
{% endcode %}
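If you only need the generated text rather than the full response object, you can pull it out of the parsed JSON by following the structure shown above. A minimal, self-contained sketch (only the relevant fields are included):

{% code overflow="wrap" %}
```python
# Extracting just the assistant's reply from a parsed response dictionary,
# following the structure shown above (irrelevant fields omitted).
data = {
    "choices": [
        {
            "index": 0,
            "finish_reason": "stop",
            "message": {"role": "assistant", "content": "Hello! It's nice to meet you."},
        }
    ],
}

reply = data["choices"][0]["message"]["content"]
print(reply)
```
{% endcode %}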
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/meta/meta-llama-3.1-405b-instruct-turbo.md # Llama-3.1-405B-Instruct-Turbo

This documentation is valid for the following list of our models:

  • meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo
## Model Overview A state-of-the-art large language model developed by Meta AI, designed for advanced text generation tasks. It excels in generating coherent and contextually relevant text across various domains. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Insert your actual AI/ML API key from your account after `Bearer ` in the `Authorization` header.\ :black\_small\_square: Insert your question or request into the `content` field; this is what the model will respond to. :digit\_four: **(Optional) Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them; a short function-calling sketch right after this section shows one of them (`tools`) in use. :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
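This model’s schema (below) also accepts a `tools` array for function calling. Here is a rough sketch of what such a request could look like, based on the `tools` and `tool_choice` parameters described in the schema; the `get_weather` function and its parameters are made up purely for illustration.

{% code overflow="wrap" %}
```python
import requests

# Sketch of a function-calling request, following the `tools` / `tool_choice`
# parameters in the API schema below. The get_weather function is a made-up example.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key after "Bearer ":
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo",
        "messages": [{"role": "user", "content": "What is the weather in Paris today?"}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "description": "Get the current weather for a city",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
        "tool_choice": "auto",
    },
)
print(response.json())
```
{% endcode %}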
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"echo":{"type":"boolean","description":"If True, the response will contain the prompt. Can be used with logprobs to return prompt logprobs."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. 
The returned text will not contain the stop sequence."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. 
You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."}},"required":["model","messages"],"title":"meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ "Content-Type":"application/json", # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo", "messages":[ { "role":"user", # Insert your question for the model here, instead of Hello: "content":"Hello" } ] } ) data = response.json() print(data) ``` {% endcode %} {% endtab %} {% endtabs %}
**Response**:

{% code overflow="wrap" %}
```json5
{'id': 'npQhshu-3NKUce-92da9f512c0f70b9', 'object': 'chat.completion', 'choices': [{'index': 0, 'finish_reason': 'stop', 'logprobs': None, 'message': {'role': 'assistant', 'content': 'Hello. How can I assist you today?', 'tool_calls': []}}], 'created': 1744208187, 'model': 'meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo', 'usage': {'prompt_tokens': 265, 'completion_tokens': 81, 'total_tokens': 346}}
```
{% endcode %}
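The schema above also defines a streaming mode (`stream: true` with `text/event-stream` chunks). The sketch below shows one way to consume such a stream with `requests`. Note the assumptions: the `data:` line prefix and the final `[DONE]` marker follow the usual OpenAI-compatible SSE convention, which the schema itself does not spell out.

{% code overflow="wrap" %}
```python
import json
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,
        # Ask for a final chunk with usage statistics, as defined in the schema:
        "stream_options": {"include_usage": True},
    },
    stream=True,
)
response.raise_for_status()

# Assumed framing: each event arrives as a "data: {...}" line and the stream ends with "data: [DONE]".
for line in response.iter_lines(decode_unicode=True):
    if not line or not line.startswith("data: "):
        continue
    payload = line[len("data: "):]
    if payload == "[DONE]":
        break
    chunk = json.loads(payload)
    # Each chunk carries a delta with the next piece of the assistant message.
    for choice in chunk.get("choices", []):
        delta = choice.get("delta") or {}
        print(delta.get("content") or "", end="", flush=True)
print()
```
{% endcode %}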
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/meta/meta-llama-3.1-70b-instruct-turbo.md # Llama-3.1-70B-Instruct-Turbo

This documentation is valid for the following list of our models:

  • meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo
## Model Overview

A state-of-the-art instruction-tuned language model designed for multilingual dialogue use cases. It excels in natural language generation and understanding tasks, outperforming many existing models on industry benchmarks.

## How to Make a Call
**Step-by-Step Instructions**

1. **Setup You Can’t Skip**
   * [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).
   * [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

2. **Copy the code example**\
   At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

3. **Modify the code example**
   * Replace `` with your actual AI/ML API key from your account.
   * Insert your question or request into the `content` field—this is what the model will respond to.

4. **(Optional) Adjust other optional parameters if needed**\
   Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. A brief example of a request with a few optional parameters is shown right after these steps.

5. **Run your modified code**\
   Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
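For step 4, the sketch below shows the same request with a few optional parameters added. The parameters and their allowed ranges come from the [API schema](#api-schema) below; the specific values here are illustrative only.

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo",
        "messages": [{"role": "user", "content": "Hello"}],
        # Optional parameters (illustrative values only):
        "max_tokens": 256,   # cap the length of the generated reply
        "temperature": 0.7,  # 0–2; lower values make output more deterministic
        "top_p": 0.9,        # nucleus sampling; alter this or temperature, not both
    },
)
print(response.json())
```
{% endcode %}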
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"echo":{"type":"boolean","description":"If True, the response will contain the prompt. Can be used with logprobs to return prompt logprobs."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. 
The returned text will not contain the stop sequence."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. 
You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."}},"required":["model","messages"],"title":"meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endtab %} {% endtabs %}
**Response**:

{% code overflow="wrap" %}
```json5
{
  "id": "npQi9tF-2j9zxn-92daa0a4ec4968f1",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hello. How can I assist you today?",
        "tool_calls": []
      }
    }
  ],
  "created": 1744208241,
  "model": "meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo",
  "usage": {
    "prompt_tokens": 67,
    "completion_tokens": 18,
    "total_tokens": 85
  }
}
```
{% endcode %}
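The assistant's reply itself is nested under `choices[0].message.content`. A minimal sketch of extracting it, assuming `data` already holds the parsed JSON response from the Python example above:

{% code overflow="wrap" %}
```python
# Assumes `data` is the parsed JSON response from the Python example above.
reply = data["choices"][0]["message"]["content"]
print(reply)  # e.g. "Hello. How can I assist you today?"
```
{% endcode %}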
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/meta/meta-llama-3.1-8b-instruct-turbo.md

# Llama-3.1-8B-Instruct-Turbo

{% hint style="info" %}
This documentation is valid for the following list of our models:

* `meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo`
{% endhint %}
Try in Playground
## Model Overview

An advanced language model designed for high-quality text generation, optimized for professional and industry applications requiring extensive GPU resources.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Insert your actual AI/ML API key from your account after `Bearer ` in the `Authorization` header.\
:black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters to adjust the model’s behavior (a short sketch with a few of them follows these steps). Below, you can also find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
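As a complement to step 4, here is a brief sketch of the same Python request with a few of the optional parameters from the API schema below added (`temperature`, `max_tokens`, `top_p`). The specific values are illustrative placeholders, not recommendations:

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key after "Bearer ":
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
        "messages": [
            {"role": "user", "content": "Hello"}  # insert your prompt here, instead of Hello
        ],
        # Optional parameters (see the API schema below for the full list):
        "temperature": 0.7,   # higher values make the output more random
        "max_tokens": 256,    # cap on generated tokens, useful for cost control
        "top_p": 0.9,         # nucleus sampling threshold
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}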
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"echo":{"type":"boolean","description":"If True, the response will contain the prompt. Can be used with logprobs to return prompt logprobs."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. 
The returned text will not contain the stop sequence."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. 
You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."}},"required":["model","messages"],"title":"meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endtab %} {% endtabs %}
**Response**:

{% code overflow="wrap" %}
```json5
{
  "id": "npQnn39-66dFFu-92dab6aaa863ef3f",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hello. How can I assist you today?",
        "tool_calls": []
      }
    }
  ],
  "created": 1744209143,
  "model": "meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo",
  "usage": {
    "prompt_tokens": 14,
    "completion_tokens": 4,
    "total_tokens": 18
  }
}
```
{% endcode %}
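The `usage` object in the response reports the billed token counts. A small sketch of reading them, assuming `data` already holds the parsed JSON response from the Python example above:

{% code overflow="wrap" %}
```python
# Assumes `data` is the parsed JSON response from the Python example above.
usage = data.get("usage", {})
print(
    f"prompt tokens: {usage.get('prompt_tokens')}, "
    f"completion tokens: {usage.get('completion_tokens')}, "
    f"total tokens: {usage.get('total_tokens')}"
)
```
{% endcode %}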
---

# Source: https://docs.aimlapi.com/api-references/moderation-safety-models/meta/meta-llama-guard-3-8b.md

# Meta-Llama-Guard-3-8B

{% hint style="info" %}
This documentation is valid for the following list of our models:

* `meta-llama/Meta-Llama-Guard-3-8B`
{% endhint %}

## Model Overview

A language model designed to provide input and output safeguards for human-AI conversations. It focuses on content moderation and safety, ensuring the responses generated by AI systems adhere to predefined safety standards.

## Setup your API Key

If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).

## Submit a request

### API Schema

{% openapi src="" path="/v1/chat/completions" method="post" %}
[Meta-Llama-Guard-3-8B.json](https://3927338786-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FROMd1X5PuqtikJ48n2N9%2Fuploads%2Fgit-blob-f12dfc4d9092eb1919ca4beefbcb04259bcdf59a%2FMeta-Llama-Guard-3-8B.json?alt=media\&token=860d19d4-22ca-4e14-bac3-4398d680b063)
{% endopenapi %}

---

# Source: https://docs.aimlapi.com/api-references/moderation-safety-models/meta.md
# Source: https://docs.aimlapi.com/api-references/text-models-llm/meta.md

# Meta

- [Llama-3-8B-Instruct-Lite](/api-references/text-models-llm/meta/meta-llama-3-8b-instruct-lite.md)
- [Llama-3.1-8B-Instruct-Turbo](/api-references/text-models-llm/meta/meta-llama-3.1-8b-instruct-turbo.md)
- [Llama-3.1-70B-Instruct-Turbo](/api-references/text-models-llm/meta/meta-llama-3.1-70b-instruct-turbo.md)
- [Llama-3.1-405B-Instruct-Turbo](/api-references/text-models-llm/meta/meta-llama-3.1-405b-instruct-turbo.md)
- [Llama-3.2-3B-Instruct-Turbo](/api-references/text-models-llm/meta/llama-3.2-3b-instruct-turbo.md)
- [Llama-3.3-70B-Instruct-Turbo](/api-references/text-models-llm/meta/llama-3.3-70b-instruct-turbo.md)
- [Llama-3.3-70B-Versatile](/api-references/text-models-llm/meta/llama-3.3-70b-versatile.md)
- [Llama-4-scout](/api-references/text-models-llm/meta/llama-4-scout.md)
- [Llama-4-maverick](/api-references/text-models-llm/meta/llama-4-maverick.md)

---

# Source: https://docs.aimlapi.com/api-references/speech-models/text-to-speech/microsoft.md

# Microsoft

- [vibevoice-1.5b](/api-references/speech-models/text-to-speech/microsoft/vibevoice-1.5b.md)
- [vibevoice-7b](/api-references/speech-models/text-to-speech/microsoft/vibevoice-7b.md)

---

# Source: https://docs.aimlapi.com/api-references/speech-models/voice-chat/minimax.md
# Source: https://docs.aimlapi.com/api-references/music-models/minimax.md
# Source: https://docs.aimlapi.com/api-references/video-models/minimax.md
# Source: https://docs.aimlapi.com/api-references/text-models-llm/minimax.md

# MiniMax

- [text-01](/api-references/text-models-llm/minimax/text-01.md)
- [m1](/api-references/text-models-llm/minimax/m1.md)
- [m2](/api-references/text-models-llm/minimax/m2.md)
- [m2-1](/api-references/text-models-llm/minimax/m2-1.md)

---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/mistral-ai/mistral-7b-instruct.md

# Mistral-7B-Instruct

{% hint style="info" %}
This documentation is valid for the following list of our models:

* `mistralai/Mistral-7B-Instruct-v0.2`
* `mistralai/Mistral-7B-Instruct-v0.3`
{% endhint %}
Try in Playground
## Model Overview

An advanced version of the Mistral-7B model, fine-tuned specifically for instruction-based tasks. This model is designed to enhance language generation and understanding capabilities.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Insert your actual AI/ML API key from your account after `Bearer ` in the `Authorization` header.\
:black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters to adjust the model’s behavior (for example, the `stream` option shown in the sketch after these steps). Below, you can also find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
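The API schema below also accepts a `stream` parameter, which switches the response to server-sent events (`text/event-stream`) instead of a single JSON body. Here is a minimal sketch of consuming such a stream with `requests`; the `data: {...}` line format and the `[DONE]` terminator are assumptions based on the common OpenAI-style SSE convention, not something this page confirms:

{% code overflow="wrap" %}
```python
import json
import requests

with requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key after "Bearer ":
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "mistralai/Mistral-7B-Instruct-v0.3",
        "messages": [
            {"role": "user", "content": "Write a haiku about the sea."}
        ],
        "stream": True,
    },
    stream=True,
) as response:
    response.raise_for_status()
    for line in response.iter_lines(decode_unicode=True):
        # Assumed SSE line format: "data: {...chunk JSON...}"
        if not line or not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta") or {}
        # Print each content fragment as it arrives.
        print(delta.get("content") or "", end="", flush=True)
    print()
```
{% endcode %}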
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["mistralai/Mistral-7B-Instruct-v0.2","mistralai/Mistral-7B-Instruct-v0.3"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"echo":{"type":"boolean","description":"If True, the response will contain the prompt. Can be used with logprobs to return prompt logprobs."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. 
Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."}},"required":["model","messages"],"title":"mistralai/Mistral-7B-Instruct-v0.3"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"mistralai/Mistral-7B-Instruct-v0.3", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'mistralai/Mistral-7B-Instruct-v0.3', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': 'npPQHux-3NKUce-92d937464c2aff02', 'object': 'chat.completion', 'choices': [{'index': 0, 'finish_reason': 'stop', 'logprobs': None, 'message': {'role': 'assistant', 'content': " Hello! How can I help you today? Is there something specific you'd like to talk about or learn more about? I'm here to answer questions and provide information on a wide range of topics. Let me know if you have any questions or if there's something you'd like to discuss.", 'tool_calls': []}}], 'created': 1744193439, 'model': 'mistralai/Mistral-7B-Instruct-v0.3', 'usage': {'prompt_tokens': 2, 'completion_tokens': 27, 'total_tokens': 29}} ``` {% endcode %}
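In most applications you only need the generated text rather than the full response object. A minimal sketch of pulling it out of the parsed JSON, assuming `data` holds the `response.json()` result from the code example above:

{% code overflow="wrap" %}
```python
# Extract the assistant's reply from the parsed chat completion response.
# `data` is the dict returned by response.json() in the example above.
reply = data["choices"][0]["message"]["content"]
finish_reason = data["choices"][0]["finish_reason"]  # e.g. "stop" or "length"
total_tokens = data["usage"]["total_tokens"]

print(reply)
print(f"finish_reason={finish_reason}, total_tokens={total_tokens}")
```
{% endcode %}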
--- # Source: https://docs.aimlapi.com/api-references/vision-models/ocr-optical-character-recognition/mistral-ai.md # Source: https://docs.aimlapi.com/api-references/text-models-llm/mistral-ai.md # Mistral AI - [mistral-nemo](/api-references/text-models-llm/mistral-ai/mistral-nemo.md) - [mistral-tiny](/api-references/text-models-llm/mistral-ai/mistral-tiny.md) - [Mistral-7B-Instruct](/api-references/text-models-llm/mistral-ai/mistral-7b-instruct.md) - [Mixtral-8x7B-Instruct](/api-references/text-models-llm/mistral-ai/mixtral-8x7b-instruct-v0.1.md) --- # Source: https://docs.aimlapi.com/api-references/text-models-llm/mistral-ai/mistral-nemo.md # mistral-nemo

{% hint style="info" %}
This documentation is valid for the following list of our models:

* `mistralai/mistral-nemo`
{% endhint %}
## Model Overview

A state-of-the-art large language model designed for advanced natural language processing tasks, including text generation, summarization, translation, and sentiment analysis.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**\
:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI.

:digit\_two: **Copy the code example**\
At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**\
:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to.

:digit\_four: **(Optional) Adjust other parameters if needed**\
Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**\
Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
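If you do decide to tune the model’s behavior, the [API schema](#api-schema) below lists optional sampling parameters such as `temperature`, `top_p`, and `max_tokens`. The following is a minimal sketch of a request that sets a few of them; the values are illustrative rather than recommendations, and `<YOUR_AIMLAPI_KEY>` is a placeholder for your key:

{% code overflow="wrap" %}
```python
import requests

# Illustrative payload: temperature, top_p, and max_tokens are optional
# parameters from the API schema below; the values here are example choices.
payload = {
    "model": "mistralai/mistral-nemo",
    "messages": [
        {"role": "user", "content": "Summarize the plot of Hamlet in two sentences."}
    ],
    "temperature": 0.3,  # lower values make the output more focused and deterministic
    "top_p": 0.9,        # nucleus sampling; usually adjust this or temperature, not both
    "max_tokens": 256,   # cap on generated tokens, useful for controlling costs
}

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",  # placeholder for your AIML API key
        "Content-Type": "application/json",
    },
    json=payload,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}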
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["mistralai/mistral-nemo"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. 
This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"top_a":{"type":"number","minimum":0,"maximum":1,"description":"Alternate top sampling parameter."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. 
The returned text will not contain the stop sequence."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"mistralai/mistral-nemo"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"mistralai/mistral-nemo", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'mistralai/mistral-nemo', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': 'gen-1744193377-PR9oTu6vDabN9nj0VUUX', 'object': 'chat.completion', 'choices': [{'index': 0, 'finish_reason': 'stop', 'logprobs': None, 'message': {'role': 'assistant', 'content': 'Hello! How can I assist you today? Let me know if you have any questions or just want to chat. 😊', 'refusal': None}}], 'created': 1744193377, 'model': 'mistralai/mistral-nemo', 'usage': {'prompt_tokens': 0, 'completion_tokens': 5, 'total_tokens': 5}} ``` {% endcode %}
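The schema above also accepts `stream: true`, in which case the completion is returned incrementally as server-sent events (`text/event-stream`) whose `delta.content` fields carry partial text. The sketch below assumes the common `data: {json}` SSE framing with a final `data: [DONE]` marker; verify the exact framing against your own responses. `<YOUR_AIMLAPI_KEY>` is a placeholder for your key:

{% code overflow="wrap" %}
```python
import json
import requests

# A sketch only: assumes OpenAI-style SSE framing ("data: {json}" lines,
# terminated by "data: [DONE]"); check the framing in your own responses.
with requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",  # placeholder for your AIML API key
        "Content-Type": "application/json",
    },
    json={
        "model": "mistralai/mistral-nemo",
        "messages": [{"role": "user", "content": "Write a haiku about the sea."}],
        "stream": True,
    },
    stream=True,
) as response:
    response.raise_for_status()
    for line in response.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data: "):
            continue
        chunk = line[len("data: "):]
        if chunk.strip() == "[DONE]":
            break
        for choice in json.loads(chunk).get("choices", []):
            delta = choice.get("delta") or {}
            print(delta.get("content") or "", end="", flush=True)
print()  # final newline after the streamed text
```
{% endcode %}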
---

# Source: https://docs.aimlapi.com/api-references/vision-models/ocr-optical-character-recognition/mistral-ai/mistral-ocr-latest.md

# mistral-ocr-latest

{% hint style="info" %}
This documentation is valid for the following list of our models:

* `mistral/mistral-ocr-latest`
{% endhint %}

## Model Overview

This Optical Character Recognition API from Mistral sets a new standard in document understanding. Unlike other models, Mistral OCR comprehends each element of documents—media, text, tables, equations—with unprecedented accuracy and cognition. It takes images and PDFs as input and extracts their content as ordered, interleaved text and images.

Maximum file size: `50` MB.\
Maximum number of pages: `1000`.

{% hint style="warning" %}
Note that this OCR does not preserve character formatting: bold, underline, italics, monospace text, etc.\
However, it preserves footnotes (superscript text).
{% endhint %}

## Set Up Your API Key

If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).

## How to Make a Call
Step-by-Step Instructions

* Copy the code from one of the [examples](#example-1-process-a-pdf-file) below, depending on whether you want to process an image or a PDF.
* Replace `` with your AIML API key from [your personal account](https://aimlapi.com/app/keys).
* Replace the URL of the document or image with the one you need.
* If you need to use different parameters, refer to the API schema below for valid values and operational logic (a short sketch of the optional fields follows this list).
* Save the modified code as a Python file and run it in an IDE[^1] or via the console.
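In addition to `model` and `document`, the request body supports a few optional fields described in the API schema below: `pages`, `include_image_base64`, `image_limit`, and `image_min_size`. Here is a minimal sketch of a call that processes only the first two pages of the sample PDF and asks for extracted images as base64; the values are illustrative, and `<YOUR_AIMLAPI_KEY>` is a placeholder for your key:

{% code overflow="wrap" %}
```python
import requests

# Illustrative request: the optional fields come from the API schema below;
# the specific values here are example choices, not recommendations.
response = requests.post(
    "https://api.aimlapi.com/v1/ocr",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",  # placeholder for your AIML API key
        "Content-Type": "application/json",
    },
    json={
        "model": "mistral/mistral-ocr-latest",
        "document": {
            "type": "document_url",
            "document_url": "https://css4.pub/2015/textbook/somatosensory.pdf",
        },
        "pages": [0, 1],               # process only the first two pages
        "include_image_base64": True,  # return extracted images as base64 strings
        "image_min_size": 100,         # skip images smaller than 100 px per side
    },
)
response.raise_for_status()
print(response.json()["usage_info"])
```
{% endcode %}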
## API Schema ## Extract text from images using OCR. > Performs optical character recognition (OCR) to extract text from images, enabling text-based analysis, data extraction, and automation workflows from visual data. ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Vision.v1.OCRResponseDTO":{"type":"object","properties":{"pages":{"type":"array","items":{"type":"object","properties":{"index":{"type":"integer","description":"The page index in a PDF document starting from 0"},"markdown":{"type":"string","description":"The markdown string response of the page"},"images":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"Image ID for extracted image in a page"},"top_left_x":{"type":"integer","nullable":true,"description":"X coordinate of top-left corner of the extracted image"},"top_left_y":{"type":"integer","nullable":true,"description":"Y coordinate of top-left corner of the extracted image"},"bottom_right_x":{"type":"integer","nullable":true,"description":"X coordinate of bottom-right corner of the extracted image"},"bottom_right_y":{"type":"integer","nullable":true,"description":"Y coordinate of bottom-right corner of the extracted image"},"image_base64":{"type":"string","nullable":true,"format":"uri","description":"Base64 string of the extracted image"}},"required":["id","top_left_x","top_left_y","bottom_right_x","bottom_right_y"]},"description":"List of all extracted images in the page"},"dimensions":{"type":"object","nullable":true,"properties":{"dpi":{"type":"integer","description":"Dots per inch of the page-image."},"height":{"type":"integer","description":"Height of the image in pixels."},"width":{"type":"integer","description":"Width of the image in pixels."}},"required":["dpi","height","width"],"description":"The dimensions of the PDF page's screenshot image"}},"required":["index","markdown","images","dimensions"]},"description":"List of OCR info for pages"},"model":{"type":"string","enum":["mistral-ocr-latest"],"description":"The model used to generate the OCR."},"usage_info":{"type":"object","properties":{"pages_processed":{"type":"integer","description":"Number of pages processed"},"doc_size_bytes":{"type":"integer","nullable":true,"description":"Document size in bytes"}},"required":["pages_processed","doc_size_bytes"],"description":"Usage info for the OCR request."}},"required":["pages","model","usage_info"]}}},"paths":{"/v1/ocr":{"post":{"operationId":"DocumentModelsController_processOCRRequest_v1","summary":"Extract text from images using OCR.","description":"Performs optical character recognition (OCR) to extract text from images, enabling text-based analysis, data extraction, and automation workflows from visual data.","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["mistral/mistral-ocr-latest"]},"document":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["document_url"],"description":"Type of document."},"document_url":{"type":"string","format":"uri","description":"Document URL."}},"required":["type","document_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"],"description":"Image URL."},"image_url":{"type":"string","format":"uri","description":"Type 
of document."}},"required":["type","image_url"]}],"description":"Document to run OCR"},"pages":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"integer"}},{"nullable":true}],"description":"Specific pages you wants to process"},"include_image_base64":{"type":"boolean","nullable":true,"description":"Include base64 images in response"},"image_limit":{"type":"integer","nullable":true,"description":"Max images to extract"},"image_min_size":{"type":"integer","nullable":true,"description":"Minimum height and width of image to extract"}},"required":["document"]}}}},"responses":{"201":{"description":"Successfully processed document with OCR","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Vision.v1.OCRResponseDTO"}}}}},"tags":["Vision Models"]}}}} ``` ## Example #1: Text Recognition From an Image We’ve found a photo of a short handwritten text for OCR testing and will be passing it to the model via URL:

Thanks, Reddit!

{% code overflow="wrap" %}
```python
import requests


def main():
    response = requests.post(
        "https://api.aimlapi.com/v1/ocr",
        headers={
            "Authorization": "Bearer ",
            "Content-Type": "application/json",
        },
        json={
            "document": {
                "type": "image_url",
                "image_url": "https://i.redd.it/hx0v4fj979k51.jpg"
            },
            "model": "mistral/mistral-ocr-latest",
        },
    )
    response.raise_for_status()
    data = response.json()
    print(data)
    return data


if __name__ == "__main__":
    main()
```
{% endcode %}
Response {% code overflow="wrap" %} ```json5 {'pages': [{'index': 0, 'markdown': 'This is a handwriting test to see how it looks on lined paper. For the past two weeks I have been trying to improve my writing along with learning hows to write with maintain pens. If you have any suggestions, tips or free resources I would love to check it out. Hope everyone is having a good day.', 'images': [], 'dimensions': {'dpi': 200, 'height': 2789, 'width': 3024}}], 'model': 'mistral-ocr-2503-completion', 'usage_info': {'pages_processed': 1, 'doc_size_bytes': 573156}} ``` {% endcode %}
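If you only need the recognized text, you can pull the `markdown` field from each page of the response. A minimal sketch, assuming `data` is the parsed JSON returned by the example above:

{% code overflow="wrap" %}
```python
# `data` is the parsed JSON response from Example #1 above
recognized_text = "\n\n".join(page["markdown"] for page in data["pages"])
print(recognized_text)
```
{% endcode %}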
## Example #2: Process a PDF File

Let's process a PDF file from the internet using the described model:

{% code overflow="wrap" %}
```python
import requests


def main():
    response = requests.post(
        "https://api.aimlapi.com/v1/ocr",
        headers={
            "Authorization": "Bearer ",
            "Content-Type": "application/json",
        },
        json={
            "document": {
                "type": "document_url",
                "document_url": "https://css4.pub/2015/textbook/somatosensory.pdf"
            },
            "model": "mistral/mistral-ocr-latest",
        },
    )
    response.raise_for_status()
    data = response.json()
    print(data)


if __name__ == "__main__":
    main()
```
{% endcode %}
Response {% code overflow="wrap" %} ```json5 {'pages': [{'index': 0, 'markdown': "# Anatomy of the Somatosensory System \n\nFrom Wiкibooks ${ }^{1}$\n\nOur somatosensory system consists of sensors in the skin and sensors in our muscles, tendons, and joints. The receptors in the skin, the so called cutaneous receptors, tell us about temperature (thermoreceptors), pressure and surface texture (mechano receptors), and pain (nociceptors). The receptors in muscles and joints provide information about muscle length, muscle tension, and joint angles.\n\n## Cutaneous receptors\n\nSensory information from Meissner corpuscles and rapidly adapting afferents leads to adjustment of grip force when objects are lifted. These afferents respond with a brief burst of action potentials when objects move a small distance during the early stages of lifting. In response to\n![img-0.jpeg](img-0.jpeg)\n\nThis is a sample document to showcase page-based formatting. It contains a chapter from a Wikibook called Sensory Systems. None of the content has been changed in this article, but some content has been removed.\n\nFigure 1: Receptors in the human skin: Mechanoreceptors can be free receptors or encapsulated. Examples for free receptors are the hair receptors at the roots of hairs. Encapsulated receptors are the Pacinian corpuscles and the receptors in the glabrous (hairless) skin: Meissner corpuscles, Ruffini corpuscles and Merkel's disks.\n\n[^0]\n[^0]: ${ }^{1}$ The following description is based on lecture notes from Laszlo Zaborszky, from Rutgers University.", 'images': [{'id': 'img-0.jpeg', 'top_left_x': 155, 'top_left_y': 1073, 'bottom_right_x': 937, 'bottom_right_y': 1694, 'image_base64': None}], 'dimensions': {'dpi': 200, 'height': 1970, 'width': 1575}}, {'index': 1, 'markdown': "Figure 2: Mammalian muscle spindle showing typical position in a muscle (left), neuronal connections in spinal cord (middle) and expanded schematic (right). The spindle is a stretch receptor with its own motor supply consisting of several intrafusal muscle fibres. The sensory endings of a primary (group Ia) afferent and a secondary (group II) afferent coil around the non-contractile central portions of the intrafusal fibres.\n![img-1.jpeg](img-1.jpeg)\nrapidly adapting afferent activity, muscle force increases reflexively until the gripped object no longer moves. Such a rapid response to a tactile stimulus is a clear indication of the role played by somatosensory neurons in motor activity.\n\nThe slowly adapting Merkel's receptors are responsible for form and texture perception. As would be expected for receptors mediating form perception, Merkel's receptors are present at high density in the digits and around the mouth ( $50 / \\mathrm{mm}^{2}$ of skin surface), at lower density in other glabrous surfaces, and at very low density in hairy skin. This innervations density shrinks progressively with the passage of time so that by the age of 50 , the density in human digits is reduced to $10 / \\mathrm{mm}^{2}$. Unlike rapidly adapting axons, slowly adapting fibers respond not only to the initial indentation of skin, but also to sustained indentation up to several seconds in duration.\n\nActivation of the rapidly adapting Pacinian corpuscles gives a feeling of vibration, while the slowly adapting Ruffini corpuscles respond to the lataral movement or stretching of skin.\n\n## Nociceptors\n\nNociceptors have free nerve endings. 
Functionally, skin nociceptors are either high-threshold mechanoreceptors", 'images': [{'id': 'img-1.jpeg', 'top_left_x': 606, 'top_left_y': 228, 'bottom_right_x': 1431, 'bottom_right_y': 705, 'image_base64': None}], 'dimensions': {'dpi': 200, 'height': 1970, 'width': 1575}}, {'index': 2, 'markdown': '| | Rapidly adapting | Slowly adapting |\n| :-- | :-- | :-- |\n| Surface receptor /
small receptive
field | Hair receptor, Meissner\'s corpuscle: De-
tect an insect or a very fine vibration.
Used for recognizing texture. | Merkel\'s receptor: Used for spa-
tial details, e.g. a round surface
edge or "an X" in brail. |\n| Deep receptor /
large receptive
field | Pacinian corpuscle: "A diffuse vibra-
tion" e.g. tapping with a pencil. | Ruffini\'s corpuscle: "A skin
stretch". Used for joint position
in fingers. |\n\nTable 1\nor polymodal receptors. Polymodal receptors respond not only to intense mechanical stimuli, but also to heat and to noxious chemicals. These receptors respond to minute punctures of the epithelium, with a response magnitude that depends on the degree of tissue deformation. They also respond to temperatures in the range of $40-60^{\\circ} \\mathrm{C}$, and change their response rates as a linear function of warming (in contrast with the saturating responses displayed by non-noxious thermoreceptors at high temperatures).\n\nPain signals can be separated into individual components, corresponding to different types of nerve fibers used for transmitting these signals. The rapidly transmitted signal, which often has high spatial resolution, is called first pain or cutaneous pricking pain. It is well localized and easily tolerated. The much slower, highly affective component is called second pain or burning pain; it is poorly localized and poorly tolerated. The third or deep pain, arising from viscera, musculature and joints, is also poorly localized, can be chronic and is often associated with referred pain.\n\n## Muscle Spindles\n\nScattered throughout virtually every striated muscle in the body are long, thin, stretch receptors called muscle spindles. They are quite simple in principle, consisting of a few small muscle fibers with a capsule surrounding the middle third of the fibers. These fibers are called intrafusal fibers, in contrast to the ordinary extrafusal fibers. The ends of the intrafusal fibers are attached to extrafusal fibers, so whenever the muscle is stretched, the intrafusal fibers are also\n\nNotice how figure captions and sidenotes are shown in the outside margin (on the left or right, depending on whether the page is left or right). Also, figures are floated to the top/ bottom of the page. Wide content, like the table and Figure 3, intrude into the outside margins.', 'images': [], 'dimensions': {'dpi': 200, 'height': 1970, 'width': 1575}}, {'index': 3, 'markdown': '![img-2.jpeg](img-2.jpeg)\n\nFigure 3: Feedback loops for proprioceptive signals for the perception and control of limb movements. Arrows indicate excitatory connections; filled circles inhibitory connections.\n\nFor more examples of how to use HTML and CSS for paper-based publishing, see css4.pub.\nstretched. The central region of each intrafusal fiber has few myofilaments and is non-contractile, but it does have one or more sensory endings applied to it. When the muscle is stretched, the central part of the intrafusal fiber is stretched and each sensory ending fires impulses.\n\nMuscle spindles also receive a motor innervation. The large motor neurons that supply extrafusal muscle fibers are called alpha motor neurons, while the smaller ones supplying the contractile portions of intrafusal fibers are called gamma neurons. Gamma motor neurons can regulate the sensitivity of the muscle spindle so that this sensitivity can be maintained at any given muscle length.\n\n## Joint receptors\n\nThe joint receptors are low-threshold mechanoreceptors and have been divided into four groups. They signal different characteristics of joint function (position, movements, direction and speed of movements). 
The free receptors or type 4 joint receptors are nociceptors.', 'images': [{'id': 'img-2.jpeg', 'top_left_x': 155, 'top_left_y': 226, 'bottom_right_x': 1307, 'bottom_right_y': 843, 'image_base64': None}], 'dimensions': {'dpi': 200, 'height': 1970, 'width': 1575}}], 'model': 'mistral-ocr-2503-completion', 'usage_info': {'pages_processed': 4, 'doc_size_bytes': 145349}} ``` {% endcode %}
## Example #3: Process a PDF File And Parse the Response

As you can see above, the model returns markdown containing the recognized text with formatting elements preserved (headings, italics, bold text, etc.), along with the location of images within the text and, if you have enabled the corresponding option `include_image_base64`, the images themselves in base64 format. However, the markdown is returned as a raw string with escaped newline characters, so you might need to parse the output separately to get clean markdown containing only the formatted text and images. In this example, we’ve written code that does this for us.
Step-by-step example explanation

* **Send OCR request**\
  The `ocr_process()` function sends a POST request to the AIML API with the URL of a PDF document. It asks for OCR results including embedded base64 images.
* **Receive structured OCR output**\
  The API returns a JSON response containing extracted Markdown text and optional base64-encoded images for each page.
* **Create output directory**\
  The script creates an `output_images/` folder to store images extracted from the base64 data.
* **Replace image placeholders**\
  For each Markdown block, the script finds image placeholders like `![img-0.jpeg](img-0.jpeg)` and replaces them with local links to newly saved images.
* **Detect image format**\
  The script checks the base64 image header (`data:image/png;base64`, etc.) to determine whether to save the image as `.png` or `.jpg`.
* **Decode and save images**\
  The base64 image is decoded and saved to a file in the `output_images/` folder.
* **Combine Markdown**\
  All Markdown blocks from all pages are joined into a single `.md` file (`output.md`), separated by horizontal rules.
* **Done**\
  The final Markdown file includes properly linked images and is ready for use or preview.
{% code overflow="wrap" %}
```python
import os
import re
import base64
import requests


def ocr_process():
    response = requests.post(
        "https://api.aimlapi.com/v1/ocr",
        headers={
            "Authorization": "Bearer ",
            "Content-Type": "application/json",
        },
        json={
            "document": {
                "type": "document_url",
                "document_url": "https://zovi0.github.io/public_misc/test-PDF-2.pdf"
            },
            "model": "mistral/mistral-ocr-latest",
            "include_image_base64": True,
            "image_limit": 5
        },
    )
    data = response.json()
    print(data)
    return data


def parse_ocr_output(ocr_output):
    output_dir = "output_images"
    os.makedirs(output_dir, exist_ok=True)

    all_markdown = []

    for page in ocr_output.get("pages", []):
        md = page["markdown"]
        images = {
            img["id"]: img["image_base64"]
            for img in page.get("images", [])
            if img.get("image_base64")
        }

        def replace_image(match):
            image_id = match.group(1)
            base64_data = images.get(image_id)
            if not base64_data:
                return match.group(0)  # Leave original markdown if no image data

            # Detect image format
            img_match = re.match(r"data:image/(png|jpeg|jpg);base64,(.*)", base64_data)
            if not img_match:
                return match.group(0)

            img_format, img_b64 = img_match.groups()
            ext = "jpg" if img_format in ["jpeg", "jpg"] else "png"
            filename = f"{image_id}.{ext}"
            filepath = os.path.join(output_dir, filename)

            with open(filepath, "wb") as f:
                f.write(base64.b64decode(img_b64))

            return f"![{filename}]({filepath})"

        # Replace image links in markdown with local image links
        md = re.sub(r"!\[.*?\]\((img-\d+\.\w+)\)", replace_image, md)
        all_markdown.append(md)

    # Combine pages with spacing
    final_md = "\n\n---\n\n".join(all_markdown)

    with open("output.md", "w", encoding="utf-8") as f:
        f.write(final_md)

    print("Markdown and images saved.")
    return final_md


if __name__ == "__main__":
    ocr_output = ocr_process()
    parse_ocr_output(ocr_output)
```
{% endcode %}
Response before parsing {% code overflow="wrap" %} ```json5 {'pages': [{'index': 0, 'markdown': '![img-0.jpeg](img-0.jpeg)\n\n# Characteristics of plant cells \n\nPlant cells have cell walls composed of cellulose, hemicelluloses, and pectin and constructed outside the cell membrane. Their composition contrasts with the cell walls of fungi, which are made of chitin, of bacteria, which are made of peptidoglycan and of archaea, which are made of pseudopeptidoglycan. In many cases lignin or suberin are secreted by the protoplast as secondary wall layers inside the primary cell wall. Cutin is secreted outside the primary cell wall and into the outer layers of the secondary cell wall of the epidermal cells of leaves, stems and other above-ground organs to form the plant cuticle. Cell walls perform many essential functions. They provide shape to form the tissue and organs of the plant, and play an important role in intercellular communication and plant-microbe interactions. ${ }^{[1]}$ The cell wall is flexible during growth and has small pores called plasmodesmata that allow the exchange of nutrients and hormones between cells. ${ }^{[2]}$', 'images': [{'id': 'img-0.jpeg', 'top_left_x': 198, 'top_left_y': 142, 'bottom_right_x': 405, 'bottom_right_y': 350, 'image_base64': 'data:image/jpeg;base64,/9j/4AAQSkZJRgABAQAAAQABAAD/2wBDAAgGBgcGBQgHBwcJCQgKDBQNDAsLDBkSEw8UHRofHh0aHBwgJC4nICIsIxwcKDcpLDAxNDQ0Hyc5PTgyPC4zNDL/2wBDAQkJCQwLDBgNDRgyIRwhMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjIyMjL/wAARCADQAM8DASIAAhEBAxEB/8QAHwAAAQUBAQEBAQEAAAAAAAAAAAECAwQFBgcICQoL/8QAtRAAAgEDAwIEAwUFBAQAAAF9AQIDAAQRBRIhMUEGE1FhByJxFDKBkaEII0KxwRVS0fAkM2JyggkKFhcYGRolJicoKSo0NTY3ODk6Q0RFRkdISUpTVFVWV1hZWmNkZWZnaGlqc3R1dnd4eXqDhIWGh4iJipKTlJWWl5iZmqKjpKWmp6ipqrKztLW2t7i5usLDxMXGx8jJytLT1NXW19jZ2uHi4+Tl5ufo6erx8vP09fb3+Pn6/8QAHwEAAwEBAQEBAQEBAQAAAAAAAAECAwQFBgcICQoL/8QAtREAAgECBAQDBAcFBAQAAQJ3AAECAxEEBSExBhJBUQdhcRMiMoEIFEKRobHBCSMzUvAVYnLRChYkNOEl8RcYGRomJygpKjU2Nzg5OkNERUZHSElKU1RVVldYWVpjZGVmZ2hpanN0dXZ3eHl6goOEhYaHiImKkpOUlZaXmJmaoqOkpaanqKmqsrO0tba3uLm6wsPExcbHyMnK0tPU1dbX2Nna4uPk5ebn6Onq8vP09fb3+Pn6/9oADAMBAAIRAxEAPwD3+iiigAooooAKKKM0AFFFFABRRRQAUUUUAFFGaKACkpc1HLNHDG0ksixxr1ZjgD8TQA+lqnbapYXhItb23nI6+VKrY/I1ZDj1pXQrofRSZpRTGFFFFABRRRQAUUUUAFFFFACUtJRQAUUUhOKAFoppbBo3UWHYdRSA5pM0WFYdRmm7xnFQy3UMQzJIqD/aOP50m0hNpE/emk+prn7/AMYaXY7l84O6jO1e9cddfELUZZi9qkccXpIuazdaC6mbrQien+YAMDjPtXM+LddtbXS5rcOrzSjZhSDXC3vi7V9Sg8ozFF9YVKk/jWKYZpDltzse5OT+dc1TFRtZHNUxStZFnS9Vm0u6LxNtJUrgdGrqvBms6hdasInYyQsCZCf4a4R47gXYLxFYlGV7nNb3hnWzo+pLIRmGX5ZsdfbFY0qmphSq66nso6L6VKOlU7O8hvIElikV0buP88VaDdOK9NNNHqJprQdRSBs0tMYUUUUAFFFFABRRRQAhooooAac5rmPEXiyLQ7uOF4i25dxPpV3xDrsWh2huHVmf+EDvXlmva5LrlytzJEsbImAM9eaxrVeRGFWryI9c0/VbfUrZJ4HVw3bPIq8XVT83A9Sa8R0XxA2jX5li2yELjyyxxmtLWfE+qavCieX9nUcko3UVjHFJLUxWLSjqeryXUMalmkUKO+RXL6z43trOFlsys83b0rzZftEpJ82XHfLHmpDaLnJc5rKeMb2MZ45vSJsX3jjWLtVEciwNj/lmaxbu/vNReNrmeScqOd9WbewfbuVGbJ64qf7FcA4a2fnvtrkeJnJ2RjKtUZlR2zOck4X3qyttGO4YVs22ksw/esVz6ipl0aIPzMSPTFZz9otZImMJT6mKke0Eqp/KniMsO+Qc9K3LiSKyh+VAy9CcdKqzTKWjdNgzwyfxVndNmv1ZozWTBzICKqXUCou9BySAR6D1rqmiW7gIddv0FZV5Ym3O5csmOuKqEmpGc6cosg0HX7nRL0L5haCRxujP8ZPevY4JUmhR1IwQDwc4rxCW2VgWztbsK1vD3iubRC0UyCSEuBuLEla9LD17aM6cNX6SPX1x0p1Z+naraalE0lrMsmAN23tmrwJ9eK74tNXPQTurjqKQHIpaYwooooAKKKKAEoPSij1oAz9SsYb61aKZA+RxkdM14tqdidO1GeDfny2xmvb7tmS0lcdQhI/KvCru4kvJ5ZJT87tzXFijgxe5Ut4A2q7gR8wz0rbGOepOMYplraIbhOOTgZrVK29o8RfkscdK8uUzhVNyehDHp9w4VvlVSOnerUejqGG9yaXfO6rOkq7CxXbitKM/u+etRDmeiOtYeCjzMFjjgUIuaczE/e596U9ietNr3sHgIU4e1nuc8p+0dmAGe/FKDzwo/Gko967Z0YYmF2rGKlySs
Ubm1efcPOAiBztC96dFpzT3SSShVkAx8vepp54oIiZHChjwBViGYx7WX5gRkGvksZRlBuNN6nrUmpWbLZsTsOHwoFUWUMrRkZ7Vblv2aM8ADFZMt0NqSbSckjg+lcuFVeL/AHjNa3LJOxnXWnSwB5BzGOT7VlTW5lZvLwM+3Wuje5E0ESshxPx1rMvLX7M6LvGCDXaptO6PLqQcdUT+EtcfSbwW6pxcSBTj8q9cDZ6V4eIjb3MdxF8rRtvAY5rptG8dXpvhFfqsiOcLsXbt+tenh8QrWZ1YeurWZ6avANL0qGORXQEHIPTFS+x/CvQTvqd++o6iiigAooooAQ0lOPSm96AM3V7tLTTp5JCANpHP0rxaCNZpnLDI6g11XxCv7n+2RZhysAiDHH41zVlgrkcLnNeXi5NuyPLxcruxfs0L3MQHY9K1pbCGRd8jsMN68Cs/THWK4aVuiKSa2ZoEaP8AfxN5MhX5mOBz6VxKN/kGFhpdjIrWONNqkkA5HOanPtSCNYh5aqQE446Ute1gMD9uRlXq2lYDRRRXqypXkmc3tEFBIAz2oAyahkZt20VGJqRowa7glzyuIYUuFZZEyoOcmpldVA27cAeuMVBNMI41KuFJbaaxN0s/zks5dsEg9B6187Ci6knKR7FGm5R0RtNe2u7YZRk9ajEFvcRqI5UAVick1l2kNutnLCziUgtyTgipLeOFI1Q7dxXmreCjvqbKhL+U0Us9yRhJEKx8DA6U+e1E7qsiqYguCQOay3SaJdkVwypnJxWnbXCvGMyZJIxk9q5p0OT4UZzo2VrmXqNktuVRC5VhwGPSsp7cIhePIfuRxXYzxrJG4ZQeODXOTwSRFty8ZrGMnB6nnVIOm7o3/B/io27NaajcsyEfKznJ/OvRY5VeNCrBh2I714ZdrlUKRnOeorX8O+Ib+LVLOGW+cWauFZTjaBXp4fFJ6M6KOL+yz2DfzS7+ahjkSaMSKwZSMqQetDyqg3F1BHXJrvunsehzKxYooopjENIeKWmscA0MDyj4gkf8JIPTyR/WsGzX902DjJq/4pvWvfEFwxA/dv5Q+gNVwgVdnr3FeNiZNz0PGxLvPQu2Vv8AaBLGTsDLtz6muhu5PtenxWhj+4yNuz2WqFvZxi2iDE/eDVcXqQMnt0rbA4V1pc3Qr2rpwsOc5diDxnI/OmcnIHY07vtUdOpNJnCt619JG0VZHFK8tStdXf2d1VV3HHOada3JuFbKAY96dJbxzFS4PFEcMduDt71UpJRuZKLbJC20E+3Sq8sqW8Zlf04FWAAWDMQorGvZfMlkcsAqHCg14deq8RPl7HsYLDq6T6lZ7yIzebgZ+9gk1Q+2O85k2bQfubT0FNdzcTOfMUrnjCgcU0DDArgKo71d+RWR9zg8rhTipSIZrN5yX851JPQcUWsEttIWMznb0Dd6mVwy7lkB57VLu3K2SDnjmleXU9R4enKNktCzBqRnBt7lBGW+6VOc1LcQOjKQwAC7Fwe55yaygrcqBwv8VWoLmSVT5zjCnCgjtTpxUnaR4WOyunD3oG/pl6ZiYGA8xFwRnr7irU0AuEIYjYOpxzWDav8A6UqhysoYHcB1FdEW5CZUFifxxXmY2moTskfMVoK+pz97ALWUIGLZ5ArLntgCJBwV5wOldNOq36TPAgMMbFNzdc+lYbpvDIGIz6iueElFnl1YqMro1tN8c3Wm2EVqLVJAmcuz4I9K5+bUru6upZmmcs7E7VcniqSxyI7bpMjpg1q2MepWcyTW9k5YA43QbgQa7IVJyasb06s3ax7nRRRXqnqjW6+1Z2s6jDpunS3EpwFXitFh196474gXcUeiC2bJaVxt/A81nUdokVHaLPNrqc3V/JNj/WPk1pWtvJcSYTqOazbVHYliMc5rotIUh2lHTGK8WUryPGj707mmm5UUEcquKngOOo7VDgnPvVxBtQDHavosvhyUn5irauxVkIZ+tJStgydKSu21kYeQVG/zHbUlIAN2SQDXJmMnCjdGlGN5mbq8jqsMSHG49fpWddWryXJkVtyKpJH17VpanbPO0W08KdxNZssz2cNy6IZtxLKq+9eRhV7vN1PoKCtKLRj2yYCZbgVTv9UWNZLdF5bvVy1BFuPNBRwCSD/KsbUIJI7gsUOG5BxXTFt7n6HSaqQjcbpVwYrsCR8RkEde/aug2klAx571zdpbtNdxqAcAg11CkEZYZIPSqZUnHZIVmEZ2r3qGE7b1geR5RNSMRkt0pLSNZZnctlgPLwPU8iqo/Fc4sdJKgzQt3CyRuB/GFH51vugkmjZl+aPkfj1rn4QNyQhgriUNluhxXTO29WdMfQVw5jK9S6PhK8iuxS2Vl3okDEtgjktiueJC5JBVT0JrT1lR5MBEWTu5zWNMuYHGc8cD0rzm3Jnj4iWtiG1g+3alDBnHmSBa9usrcQW0cPZEC/kK8X0BC2vWOAS6yrkCvcE4Ar1cJG0dTuwcfd1JKKKQ13HaNbO7INeY/EK+SXUorIDDQDcT/vYr0uQkH2715F41mSfxFM8bq/yoMpz0rnxErRObEu0bIy7XmHJNb+lFRbsSSPmrBtBujA6MeiiteTVrHRoUhuiyu4yPlzzXjKEpPQ4sHRlWk1E2kOcYNXecY9qzbaaO5iEsTZXHToa0I23KDX1OH1ppLoZVITjUakiq/D0lTSxZOQMVFsYDJrpvc5ne+w2mP1Bp/WjtissRTVWHKVCXLK4yXbPE6KcFhgCuZuLS4S78t5NiIAojHUgf0rpHDKHCABipwTWQu9ogtyGW4dtocj5cfWvFowdOfJM9zCTU476mXf2bXJzaurEHPy9hUDF4iUmQ7iOCw4NbTRsqBbdF/d/K5HGe9QbhLAUkQFgOCT1NddtbI9/DZnOk7PYzFUJhtqg/SlL7WPzDA9Kui2a5ljDpsAHI6U77ItoSix/PjOX6H6UOCtqen/bMEtjNQmeTYmST7VoiMRwosBVZUYFie3uat+dF5AlSNYmJ2gMMc1DFYoZTdktKM/Mid6zUow3PGxeaSr+6hsi/bLiDaNyqcOF6n3FdJEqR4VSQEXoaxLWD7LfvPONkJXgd844rYWdDGJGx0BxnrXn4urHmSPHxEk2oroZGqsxuAoc7WGfoayLp1ICIeVPNXbyZXnkkzwDx9KyMPPP+7TLyMAF9a5qUbyseP8dQ9M8DaVbDSIb7yw08mfnI6YJrslPQe1YfhK1mtfDltDOmx1ByPxNbgAzXuUo2iezRjyxsSUlLSVoaHl/jLxDqEWvXmnxSbYE2YCnB5QHr+NcVPMIwZJnwDyT3rofGw/4rC+65Bj/9FrWA9uxUGWNip9VIry6jm5SR5lRzdRouWE0ckSSI2FQjnFTa7DDqeoxCSZAiQ5ywxzmq1nsiTYrAAnoauGG1uVZbuFnCnjy2wfxrOhU5J6m+U4qNCveQ7w3eQmaW3LsFB+Tdxn8e9dXHIVOT+Vc5DokLSWphnASNt3lk5xW+Tyf8K9Onj43sb5wqWJkqtFlwOJAMUrrlMYqorMvIp3nOeK9JVIyjzRZ4zk9IsYRgkUlByTRX
PKpzvlSLVOEPeEOAeR9Ka6q0YjcZX+7jNK3I+lO3EKCDn6V5OZTcasV1NsO1F3tuc5cSS21y0aRsIhyp96hnmKebIUOw4aLaue3P610rrG6lZhhc7h65qvFYxRQ7FdgueATzVxx0EvM9fnstEY62t5hJrc5LAFt/bvTZ4rmObzCzMwwSuM4z1NdFDEtvHsBJOc/eqUwSTsscJWOVj8rsMhT7461k8xb2iQ6skY72zXcS+VHlVJG48YOOuKu2Nu1rZrG4APcjvVz7K9qFSeeOZ2PLRcKPYg96rz3MdsvzkA56H0rCrUlVVluQpTnLliQTSGVzEqxtGAdxYjOaoSxSJZsQx24Aya0JLJJ3SQEEYJz7VmXl29wRGBhIzt+tcjUnpLdHFVnKHNzbmZdn90g9TyfWrPhkBvEun7sZ80DB71TvGzKkYXAHOa6bwNpIuNTjuZ43Cxr5kbHpmu3CwuznwsLu56ioGMdPanDGelApa9hHsLQWjFFFAzLudA0y7vWu57KKSc4JkI54GB/KpbnSrO7txDPAjoOikdKvUhNTyolxR494o0GTS9QdoosW+NwKjge1ZNvcDftdwpHb1r2m9sY722eGVQVbocV5T4n8OHQrhPLlkeKUZDFRw2elediMPrdHm1sPZuUSK1uPs8/mKxJXk49K6GGeK5i3IzFupz1FchFdliivjHTitG1vDaMzKMjuK82ULGFGfJ7p0Z4HHSja5wR0NQW94lxCGGPpUrS+WgfdhScH2renWrUtmdboxqajwjYJxSY6561TN87s8cZ+ZT8n+0KfZXLXUZbGGRsHNbvM6z93Qj2NPYsMoIwehqJvkBjVuT3qXIJOKjdXD7goxXDWqSlq9zsw8VzJJbFVi7fLJu/3hT1dNu15lcjoO9PEyuxV4zj1FKv2c8ALj0rz/eTue5KK5LWGCaJpd2AAB0qS3upPtG+J2TaOCKgkQNJsRQPepoSY9ysoUAfe9a0pud9zDEUYRp8y3JFRQzuSWkc723dz61yniK6ea+SGKSaJAh3yp91a6eWV2tZGhYO+DwB7VzFjYXl4fM8vygsnzs/Vh6CvcwjgouUtxZY6cXKpPoa+lNL/AGEGllZ22kByckgdKzWO4852k1vTmGzsthPykHAPUfSucmYpAceua56klOfMjwczkqlbmjsVTm5u/KXdlvlXHrXs+haf/Z2k21u2C6JgsO/evOPBOlxalrDSSEg26iUAdz0r1lcKoUdq9DC09LmuFp6cw8UUUortOwWiiigAxSYopaAGbBjGOKhubSC5j2TxCRR2IBxVimuCV96LITSe5w+u+CdPSxmnskeOZRvHJIPtivOhcTL8p+XnGCOa97ZdwAYZHeue1PwhpeoMZHiZHzkFWxXHXwqlqjjr4VS1iebWtw6fvYzhxxt7Vp292t0jW8w5buOKwb6GTT9QntsOgVztyeozViOU4RgQG715U6bjozz/AN5Tep0iWUUbwtj/AFQwpzUyxrHkAYZjk4rP0+9MqOkjAHsTUGvMWVFjklyVyTGSKeHoKcrM9bBU/rLUUzXMkcZzK4RP7x45pqSxzoSrh1zjKmuRukmkS3hjeV9yFm35bkDNdDo0Bg0uPeFDOAxG3GK6sRhlTjc9HFYD6tG/MWxbIACu7n3qRbZFTAH60gmiCk7l47ZrLm1NlciPFeZ7K+p5ssZJL4jUNuGTbtAHrup0gjiVVdlIx65rCk1GeSPG5R+FVJZXYDdJx+VWqaOeePnLQ27nUooHAiUH6Cs/+0ZwSUIA9CM1nvMsa5U7jVU3cm729K1SlscrrVWmk9zQvb13i3ysDsBI4rIinmniPmnJPI4xSzs0wKk43DFNXaoQZOAMVahbUzUZKPvnpXw7so1sJbwcSMxRvcCu4P3q4X4dXqGynsx95D5h+hruuuM969jD2UEevhrKmhxpRSe1LWxuLRRRQAUUUUAFIaWkNABjimNzn2p/amMCeKLBY5Hxd4eGp2v2m2hzeLwuP4q8zkjnsrnybgbJUb5lb+Ef1r3nywTmsrVvD1hqyO0tvGZtuFk5BBrlrYdSRzVsOpank0cwkyVIJXritWwvNoKTEhR0zWXrGk3Wh34hmKhm+ZQOhGetZTXN1Hcja25T/f6V5jpcjPL9+jPQ7CbUIYAfsw3se47Vmvd3DEhpWCnsaxzdSuOdseOgTv8AnUsKXV1KsUKu7HpjmkozmzSdWpUdi00yA8vTBcRdN1a9n4H1a8hEjBIs9pOtWo/h3qfmruuIAmfmwecVtDCyFHDTZzMtyX+WJlx6ioYxLdOETc7E4C9TXrlt4N0iO3WOS0jkOOWJPJrQs9A0uwfzLazjjfpkVvHBnRDB9zydPCuuMgKWEhB6Ekf41oaX4J1K6nH2yBraLuW6n8q9Y2YPApcfnW8cMludEcJBHITeDbC30mdILcS3Gw7Hbk57CvP00DVWlS3WzPmY5AHfFe3baaIQMnA3euKqWGiyp4aMjhPA+jahpt1dPeWzQq8QUEkHnNd6KQR4PtTgDWlOHKrGlOHIrCgUvegUVoaC0UUUAFFFFABSGlooASjiloxQAlM6Z7CpMU3AoYHB+P4LFoI5J3AuQuE9SM15v16HIHSvZvEHha018bpWaOdE2JIDkL3+7xmufX4ax/ZWibU905IKyCDAUDqNu7nNcNWhKTujhrUakpXiYvgvRrPWJrr7XGXWILxnHWvQtO0TT9MG21tkTJzzyf1qt4a8MReH4XHnmeaTAZ9u0H8MnFb3lr9a6KVPlSub0qXKtVqJgDAIyaMc04AAUBa2NnpsJg04UUUMYUUuKKQCUUtFMBtOHSjFFACUZpaMUAf/2Q=='}], 'dimensions': {'dpi': 200, 'height': 2339, 'width': 1654}}], 'model': 'mistral-ocr-2503-completion', 'usage_info': {'pages_processed': 1, 'doc_size_bytes': 60230}} ``` {% endcode %}
Response after parsing

**Contents of the `output.md` file:**

{% code overflow="wrap" %}
```markdown
![img-0.jpeg.jpg](output_images\img-0.jpeg.jpg)

# Characteristics of plant cells

Plant cells have cell walls composed of cellulose, hemicelluloses, and pectin and constructed outside the cell membrane. Their composition contrasts with the cell walls of fungi, which are made of chitin, of bacteria, which are made of peptidoglycan and of archaea, which are made of pseudopeptidoglycan. In many cases lignin or suberin are secreted by the protoplast as secondary wall layers inside the primary cell wall. Cutin is secreted outside the primary cell wall and into the outer layers of the secondary cell wall of the epidermal cells of leaves, stems and other above-ground organs to form the plant cuticle. Cell walls perform many essential functions. They provide shape to form the tissue and organs of the plant, and play an important role in intercellular communication and plant-microbe interactions. ${ }^{[1]}$ The cell wall is flexible during growth and has small pores called plasmodesmata that allow the exchange of nutrients and hormones between cells. ${ }^{[2]}$
```
{% endcode %}

**Content of `output_images` subfolder**
**How it looks in any Markdown viewer:**
It looks almost identical to the original PDF, but now all the text has been recognized, and the resulting Markdown is easy to reuse, for example by embedding it in a web page. Enjoy!
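Since the parsed result is plain Markdown, converting it to HTML for a web page takes only a few lines. Below is a minimal sketch that assumes the third-party `markdown` package is installed (`pip install markdown`); any Markdown-to-HTML converter would work just as well:

{% code overflow="wrap" %}
```python
import markdown  # third-party package, assumed to be installed separately

# Read the Markdown produced by the parsing script above
with open("output.md", "r", encoding="utf-8") as f:
    md_text = f.read()

# Convert it to an HTML fragment that can be embedded in a web page
html_fragment = markdown.markdown(md_text)

with open("output.html", "w", encoding="utf-8") as f:
    f.write(html_fragment)
```
{% endcode %}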
[^1]: An integrated development environment (IDE) is a software application that helps programmers write, test, and debug software code efficiently. --- # Source: https://docs.aimlapi.com/api-references/text-models-llm/mistral-ai/mistral-tiny.md # mistral-tiny

This documentation is valid for the following list of our models:

* mistralai/mistral-tiny
## Model Overview

A lightweight language model optimized for efficient text generation, summarization, and code completion tasks. It is designed to operate effectively in resource-constrained environments while maintaining high performance.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field (this is what the model will respond to).

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior; a minimal illustration is shown right after these steps. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
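As an illustration of step 4, here is a minimal sketch of a request that adds a system message and a couple of optional parameters from the [API schema](#api-schema) below; the chosen values are purely illustrative, not recommendations:

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "mistralai/mistral-tiny",
        "messages": [
            {"role": "system", "content": "You are a concise assistant."},
            {"role": "user", "content": "Explain in one sentence what an API key is."},
        ],
        # Optional parameters (see the API schema below); values are illustrative
        "temperature": 0.2,
        "max_tokens": 100,
    },
)
print(response.json())
```
{% endcode %}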
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["mistralai/mistral-tiny"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. 
We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"top_a":{"type":"number","minimum":0,"maximum":1,"description":"Alternate top sampling parameter."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"mistralai/mistral-tiny"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"mistralai/mistral-tiny", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'mistralai/mistral-tiny', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{'id': 'gen-1744193337-VPTpAxEsMzJ79PKh5w4X', 'object': 'chat.completion', 'choices': [{'index': 0, 'finish_reason': 'stop', 'logprobs': null, 'message': {'role': 'assistant', 'content': "Hello! How can I assist you today? Feel free to ask me anything, I'm here to help. If you are looking for general information or help with a specific question, please let me know. I am happy to help with a wide range of topics, including but not limited to, technology, science, health, education, and more. Enjoy your day!", 'refusal': null}}], 'created': 1744193337, 'model': 'mistralai/mistral-tiny', 'usage': {'prompt_tokens': 2, 'completion_tokens': 42, 'total_tokens': 44}}
```
{% endcode %}
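If you only need the generated text rather than the full response object, it sits in the `choices` array shown above. A minimal sketch that continues the Python example (assuming `data` still holds the parsed JSON response):

```python
# `data` is the parsed JSON response from the request above.
# The assistant's reply is the message content of the first choice.
reply = data["choices"][0]["message"]["content"]
print(reply)

# Token accounting (prompt + completion) is reported in the `usage` object.
print("Total tokens used:", data["usage"]["total_tokens"])
```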
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/mistral-ai/mixtral-8x7b-instruct-v0.1.md

# Mixtral-8x7B-Instruct

This documentation is valid for the following list of our models:

  • mistralai/Mixtral-8x7B-Instruct-v0.1
Try in Playground
## Model Overview

A state-of-the-art sparse mixture-of-experts model designed for instruction-following tasks. Built from eight 7B experts (roughly 47 billion total parameters, with about 13 billion active per token), it excels at understanding and executing complex instructions, providing accurate and relevant responses across a wide range of contexts. This model is ideal for creating highly interactive and intelligent systems that can perform specific tasks based on user commands.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["mistralai/Mixtral-8x7B-Instruct-v0.1"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. 
This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. 
This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."}},"required":["model","messages"],"title":"mistralai/Mixtral-8x7B-Instruct-v0.1"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"mistralai/Mixtral-8x7B-Instruct-v0.1", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'mistralai/Mixtral-8x7B-Instruct-v0.1', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{'id': 'npPEmQg-4yUbBN-92d909e708872095', 'object': 'chat.completion', 'choices': [{'index': 0, 'finish_reason': 'stop', 'logprobs': null, 'message': {'role': 'assistant', 'content': ' Hello! How can I help you today? If you have any questions or need assistance with a topic related to mathematics, I will do my best to help you understand. Just let me know what you are working on or what you are curious about.', 'tool_calls': []}}], 'created': 1744191581, 'model': 'mistralai/Mixtral-8x7B-Instruct-v0.1', 'usage': {'prompt_tokens': 11, 'completion_tokens': 66, 'total_tokens': 77}}
```
{% endcode %}
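The schema above also accepts optional sampling and streaming parameters such as `temperature`, `max_tokens`, and `stream`. The sketch below is an illustration only: it reuses the Python request from the code example, adds those three parameters, and reads the resulting server-sent events line by line. The `data:` prefix and `[DONE]` sentinel follow the usual OpenAI-compatible streaming convention and are an assumption here, not something stated in the schema.

```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "mistralai/Mixtral-8x7B-Instruct-v0.1",
        "messages": [{"role": "user", "content": "Write a haiku about the sea"}],
        # Optional parameters listed in the API schema above
        "temperature": 0.7,
        "max_tokens": 256,
        "stream": True,  # deliver the completion as server-sent events
    },
    stream=True,
)
response.raise_for_status()

# Each event line is assumed to look like "data: {...chunk JSON...}",
# with the stream ending at "data: [DONE]".
for line in response.iter_lines():
    if not line:
        continue
    payload = line.decode("utf-8").removeprefix("data: ")
    if payload == "[DONE]":
        break
    print(payload)
```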
---

# Source: https://docs.aimlapi.com/api-references/model-database.md

# All Model IDs

{% hint style="info" %}
If you need to select models based on specific parameters for your task, visit the [dedicated page on our official website](https://aimlapi.com/models/), which offers convenient filtering options. On the selected model’s page, you can find detailed technical and commercial information.
{% endhint %}

{% hint style="success" %}
To fetch the complete model list via the API, see [the API reference](https://docs.aimlapi.com/api-references/service-endpoints/complete-model-list) for the relevant service endpoint.
{% endhint %}

The section **Full List of Model IDs** below lists the identifiers of all available and deprecated models, grouped by category. These IDs are used to specify the exact models in your code, like this:
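The snippet below is a minimal sketch: the endpoint and the `mistralai/mistral-tiny` ID are taken from the chat-completion examples elsewhere in this documentation, and any ID from the tables that follow can be substituted in the `model` field.

```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "mistralai/mistral-tiny",  # <-- the Model ID goes here
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(response.json())
```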
If you already know the model ID, use the page search function (Ctrl+F for Win/Linux, Command+F for Mac) to locate it. The hyperlink will take you directly to the model's API Reference page.

{% hint style="success" %}
**New Model Request**

Can't find the model you need? Join our [Discord community](https://discord.gg/8CwhkUuCR6) to propose new models for integration into our API offerings. Your contributions help us grow and serve you better.
{% endhint %}

## Full List of Model IDs

### Text Models (LLM)
| Model ID + API Reference link | Developer | Context | Model Card |
|---|---|---|---|
| gpt-3.5-turbo | Open AI | 16,000 | Chat GPT 3.5 Turbo |
| gpt-3.5-turbo-0125 | Open AI | 16,000 | Chat GPT-3.5 Turbo 0125 |
| gpt-3.5-turbo-1106 | Open AI | 16,000 | Chat GPT-3.5 Turbo 1106 |
| gpt-4o | Open AI | 128,000 | Chat GPT-4o |
| gpt-4o-2024-08-06 | Open AI | 128,000 | GPT-4o-2024-08-06 |
| gpt-4o-2024-05-13 | Open AI | 128,000 | GPT-4o-2024-05-13 |
| gpt-4o-mini | Open AI | 128,000 | Chat GPT 4o mini |
| gpt-4o-mini-2024-07-18 | Open AI | 128,000 | GPT 4o mini |
| chatgpt-4o-latest | Open AI | 128,000 | - |
| gpt-4o-audio-preview | Open AI | 128,000 | GPT-4o Audio Preview |
| gpt-4o-mini-audio-preview | Open AI | 128,000 | GPT-4o mini Audio |
| gpt-4o-search-preview | Open AI | 128,000 | GPT-4o Search Preview |
| gpt-4o-mini-search-preview | Open AI | 128,000 | GPT-4o Mini Search Preview |
| gpt-4-turbo | Open AI | 128,000 | Chat GPT 4 Turbo |
| gpt-4-turbo-2024-04-09 | Open AI | 128,000 | - |
| gpt-4 | Open AI | 8,000 | Chat GPT 4 |
| gpt-4-0125-preview | Open AI | 8,000 | - |
| gpt-4-1106-preview | Open AI | 8,000 | - |
| o1 | Open AI | 200,000 | OpenAI o1 |
| openai/o3-2025-04-16 | Open AI | 200,000 | o3 |
| o3-mini | Open AI | 200,000 | OpenAI o3 mini |
| openai/o3-pro | Open AI | 200,000 | o3-pro |
| openai/gpt-4.1-2025-04-14 | Open AI | 1,000,000 | GPT-4.1 |
| openai/gpt-4.1-mini-2025-04-14 | Open AI | 1,000,000 | GPT-4.1 Mini |
| openai/gpt-4.1-nano-2025-04-14 | Open AI | 1,000,000 | GPT-4.1 Nano |
| openai/o4-mini-2025-04-16 | Open AI | 200,000 | GPT-o4-mini-2025-04-16 |
| openai/gpt-oss-20b | Open AI | 128,000 | GPT OSS 20B |
| openai/gpt-oss-120b | Open AI | 128,000 | GPT OSS 120B |
| openai/gpt-5-2025-08-07 | Open AI | 400,000 | GPT-5 |
| openai/gpt-5-mini-2025-08-07 | Open AI | 400,000 | GPT-5 Mini |
| openai/gpt-5-nano-2025-08-07 | Open AI | 400,000 | GPT-5 Nano |
| openai/gpt-5-chat-latest | Open AI | 400,000 | GPT-5 Chat |
| openai/gpt-5-1 | Open AI | 128,000 | GPT-5.1 |
| openai/gpt-5-1-chat-latest | Open AI | 128,000 | GPT-5.1 Chat Latest |
| openai/gpt-5-1-codex | Open AI | 400,000 | GPT-5.1 Codex |
| openai/gpt-5-1-codex-mini | Open AI | 400,000 | GPT-5.1 Codex Mini |
| openai/gpt-5-2 | Open AI | 400,000 | GPT-5.2 |
| openai/gpt-5-2-chat-latest | Open AI | 400,000 | GPT-5.2 Chat Latest |
| openai/gpt-5-2-pro | Open AI | 400,000 | GPT-5.2 Pro |
| openai/gpt-5-2-codex | Open AI | 400,000 | GPT-5.2 Codex |
| claude-3-opus-20240229 | Anthropic | 200,000 | Claude 3 Opus |
| claude-3-haiku-20240307 | Anthropic | 200,000 | - |
| claude-3-5-haiku-20241022 | Anthropic | 200,000 | - |
| claude-3-7-sonnet-20250219 | Anthropic | 200,000 | Claude 3.7 Sonnet |
| anthropic/claude-opus-4 | Anthropic | 200,000 | Claude 4 Opus |
| anthropic/claude-opus-4.1, claude-opus-4-1, claude-opus-4-1-20250805 | Anthropic | 200,000 | Claude Opus 4.1 |
| anthropic/claude-sonnet-4 | Anthropic | 200,000 | Claude 4 Sonnet |
| claude-sonnet-4-5-20250929, anthropic/claude-sonnet-4.5, claude-sonnet-4-5 | Anthropic | 200,000 | Claude 4.5 Sonnet |
| anthropic/claude-haiku-4.5, claude-haiku-4-5, claude-haiku-4-5-20251001 | Anthropic | 200,000 | Claude 4.5 Haiku |
| anthropic/claude-opus-4-5, claude-opus-4-5, claude-opus-4-5-20251101 | Anthropic | 200,000 | Claude 4.5 Opus |
| Qwen/Qwen2.5-7B-Instruct-Turbo | Alibaba Cloud | 32,000 | Qwen 2.5 7B Instruct Turbo |
| qwen-max | Alibaba Cloud | 32,000 | Qwen Max |
| qwen-max-2025-01-25 | Alibaba Cloud | 32,000 | Qwen Max 2025-01-25 |
| qwen-plus | Alibaba Cloud | 131,000 | Qwen Plus |
| qwen-turbo | Alibaba Cloud | 1,000,000 | Qwen Turbo |
| Qwen/Qwen2.5-72B-Instruct-Turbo | Alibaba Cloud | 32,000 | Qwen 2.5 72B Instruct Turbo |
| Qwen/Qwen3-235B-A22B-fp8-tput | Alibaba Cloud | 32,000 | Qwen 3 235B A22B |
| alibaba/qwen3-32b | Alibaba Cloud | 131,000 | Qwen3-32B |
| alibaba/qwen3-coder-480b-a35b-instruct | Alibaba Cloud | 262,000 | Qwen3 Coder |
| alibaba/qwen3-235b-a22b-thinking-2507 | Alibaba Cloud | 262,000 | Qwen3 235B A22B Thinking |
| alibaba/qwen3-next-80b-a3b-instruct | Alibaba Cloud | 262,000 | Qwen3-Next-80B-A3B Instruct |
| alibaba/qwen3-next-80b-a3b-thinking | Alibaba Cloud | 262,000 | Qwen3-Next-80B-A3B Thinking |
| alibaba/qwen3-max-preview | Alibaba Cloud | 258,000 | Qwen3-Max Preview |
| alibaba/qwen3-max-instruct | Alibaba Cloud | 262,000 | Qwen3-Max Instruct |
| qwen3-omni-30b-a3b-captioner | Alibaba Cloud | 65,000 | qwen3-omni-30b-a3b-captioner |
| alibaba/qwen3-vl-32b-instruct | Alibaba Cloud | 126,000 | Qwen3 VL 32B Instruct |
| alibaba/qwen3-vl-32b-thinking | Alibaba Cloud | 126,000 | Qwen3 VL 32B Thinking |
| anthracite-org/magnum-v4-72b | Anthracite | 32,000 | Magnum v4 72B |
| baidu/ernie-4-5-8k-preview | Baidu | 8,000 | ERNIE 4.5 |
| baidu/ernie-4.5-0.3b | Baidu | 120,000 | ERNIE 4.5 |
| baidu/ernie-4.5-21b-a3b | Baidu | 120,000 | ERNIE 4.5 |
| baidu/ernie-4.5-21b-a3b-thinking | Baidu | 131,000 | ERNIE 4.5 |
| baidu/ernie-4.5-vl-28b-a3b | Baidu | 30,000 | ERNIE 4.5 VL |
| baidu/ernie-4.5-vl-424b-a47b | Baidu | 123,000 | ERNIE 4.5 VL |
| baidu/ernie-4.5-300b-a47b | Baidu | 123,000 | ERNIE 4.5 |
| baidu/ernie-4.5-300b-a47b-paddle | Baidu | 123,000 | ERNIE 4.5 |
| baidu/ernie-4-5-turbo-128k | Baidu | 128,000 | ERNIE 4.5 |
| baidu/ernie-4-5-turbo-vl-32k | Baidu | 32,000 | ERNIE 4.5 VL |
| baidu/ernie-5-0-thinking-preview | Baidu | 128,000 | ERNIE 5.0 |
| baidu/ernie-5-0-thinking-latest | Baidu | 128,000 | ERNIE 5.0 |
| baidu/ernie-x1-turbo-32k | Baidu | 32,000 | Coming Soon |
| baidu/ernie-x1-1-preview | Baidu | 64,000 | Coming Soon |
| bytedance/seed-1-8 | ByteDance | 256,000 | Seed 1.8 |
| cohere/command-a | Cohere | 256,000 | Command A |
| deepseek-chat or deepseek/deepseek-chat or deepseek/deepseek-chat-v3-0324 | DeepSeek | 128,000 | DeepSeek V3 |
| deepseek/deepseek-r1 or deepseek-reasoner | DeepSeek | 128,000 | DeepSeek R1 |
| deepseek/deepseek-prover-v2 | DeepSeek | 164,000 | DeepSeek Prover V2 |
| deepseek/deepseek-chat-v3.1 | DeepSeek | 128,000 | DeepSeek V3.1 Chat |
| deepseek/deepseek-reasoner-v3.1 | DeepSeek | 128,000 | DeepSeek V3.1 Reasoner |
| deepseek/deepseek-thinking-v3.2-exp | DeepSeek | 128,000 | DeepSeek V3.2-Exp Thinking |
| deepseek/deepseek-non-thinking-v3.2-exp | DeepSeek | 128,000 | DeepSeek V3.2-Exp Non-Thinking |
| deepseek/deepseek-reasoner-v3.1-terminus | DeepSeek | 128,000 | DeepSeek V3.1 Terminus Reasoning |
| deepseek/deepseek-non-reasoner-v3.1-terminus | DeepSeek | 128,000 | DeepSeek V3.1 Terminus Non-Reasoning |
| deepseek/deepseek-v3.2-speciale | DeepSeek | 128,000 | DeepSeek V3.2 Speciale |
| gemini-2.0-flash-exp | Google | 1,000,000 | Gemini 2.0 Flash Experimental |
| gemini-2.0-flash | Google | 1,000,000 | Gemini 2.0 Flash |
| google/gemini-2.5-flash-lite-preview | Google | 1,000,000 |  |
| google/gemini-2.5-flash | Google | 1,000,000 | Gemini 2.5 Flash |
| google/gemini-3-flash-preview | Google | 1,000,000 | Gemini 3 Flash |
| google/gemini-2.5-pro | Google | 1,000,000 | Gemini 2.5 Pro |
| google/gemini-3-pro-preview | Google | 200,000 | Gemini 3 Pro Preview |
| google/gemma-3-4b-it | Google | 128,000 | Gemma 3 (4B) |
| google/gemma-3-12b-it | Google | 128,000 | Gemma 3 (12B) |
| google/gemma-3-27b-it | Google | 128,000 | Gemma 3 (27B) |
| google/gemma-3n-e4b-it | Google | 8,192 | Gemma 3n 4B |
| gryphe/mythomax-l2-13b | Gryphe | 4,000 | MythoMax-L2 (13B) |
| mistralai/Mixtral-8x7B-Instruct-v0.1 | Mistral AI | 64,000 | Mixtral-8x7B Instruct v0.1 |
| meta-llama/Llama-3.3-70B-Instruct-Turbo | Meta | 128,000 | Meta Llama 3.3 70B Instruct Turbo |
| meta-llama/Llama-3.2-3B-Instruct-Turbo | Meta | 131,000 | Llama 3.2 3B Instruct Turbo |
| meta-llama/Meta-Llama-3-8B-Instruct-Lite | Meta | 9,000 | Llama 3 8B Instruct Lite |
| meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo | Meta | 4,000 | Llama 3.1 (405B) Instruct Turbo |
| meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo | Meta | 128,000 | Llama 3.1 8B Instruct Turbo |
| meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo | Meta | 128,000 | Llama 3.1 70B Instruct Turbo |
| meta-llama/llama-4-scout | Meta | 1,000,000 | Llama 4 Scout |
| meta-llama/llama-4-maverick | Meta | 256,000 | Llama 4 Maverick |
| meta-llama/llama-3.3-70b-versatile | Meta | 131,000 | Llama 3.3 70B Versatile |
| mistralai/Mistral-7B-Instruct-v0.2 | Mistral AI | 32,000 | Mistral (7B) Instruct v0.2 |
| mistralai/Mistral-7B-Instruct-v0.3 | Mistral AI | 32,000 | Mistral (7B) Instruct v0.3 |
| mistralai/mistral-tiny | Mistral AI | 32,000 | Mistral Tiny |
| mistralai/mistral-nemo | Mistral AI | 128,000 | Mistral Nemo |
| nvidia/llama-3.1-nemotron-70b-instruct | NVIDIA | 128,000 | Llama 3.1 Nemotron 70B Instruct |
| nvidia/nemotron-nano-9b-v2 | NVIDIA | 128,000 | Nemotron Nano 9B V2 |
| nvidia/nemotron-nano-12b-v2-vl | NVIDIA | 128,000 | Nemotron Nano 12B V2 VL |
| MiniMax-Text-01 | MiniMax | 1,000,000 | MiniMax-Text-01 |
| minimax/m1 | MiniMax | 1,000,000 | MiniMax M1 |
| minimax/m2 | MiniMax | 200,000 | MiniMax M2 |
| minimax/m2-1 | MiniMax | 204,800 | MiniMax-M2.1 |
| moonshot/kimi-k2-preview | Moonshot | 131,000 | Kimi-K2 |
| moonshot/kimi-k2-0905-preview | Moonshot | 256,000 | Kimi-K2 |
| moonshot/kimi-k2-turbo-preview | Moonshot | 256,000 | Kimi K2 Turbo Preview |
| nousresearch/hermes-4-405b | NousResearch | 131,000 | - |
| perplexity/sonar | Perplexity | 128,000 | Sonar |
| perplexity/sonar-pro | Perplexity | 200,000 | Sonar Pro |
| x-ai/grok-3-beta | xAI | 131,000 | Grok 3 Beta |
| x-ai/grok-3-mini-beta | xAI | 131,000 | Grok 3 Beta Mini |
| x-ai/grok-4-07-09 | xAI | 256,000 | Grok 4 |
| x-ai/grok-code-fast-1 | xAI | 256,000 | Grok Code Fast 1 |
| x-ai/grok-4-fast-non-reasoning | xAI | 2,000,000 | Grok 4 Fast |
| x-ai/grok-4-fast-reasoning | xAI | 2,000,000 | Grok 4 Fast Reasoning |
| x-ai/grok-4-1-fast-non-reasoning | xAI | 2,000,000 | Grok 4.1 Fast Non-Reasoning |
| x-ai/grok-4-1-fast-reasoning | xAI | 2,000,000 | Grok 4.1 Fast Reasoning |
| zhipu/glm-4.5-air | Zhipu | 128,000 | GLM-4.5 Air |
| zhipu/glm-4.5 | Zhipu | 128,000 | GLM-4.5 |
| zhipu/glm-4.6 | Zhipu | 200,000 | GLM-4.6 |
| zhipu/glm-4.7 | Zhipu | 200,000 | GLM-4.7 |
### Image Models
| Model ID + API Reference link | Developer | Context | Model Card |
|---|---|---|---|
| alibaba/qwen-image | Alibaba Cloud |  | Qwen Image |
| alibaba/qwen-image-edit | Alibaba Cloud |  | Qwen Image Edit |
| alibaba/z-image-turbo | Alibaba Cloud |  | Z-Image Turbo |
| alibaba/z-image-turbo-lora | Alibaba Cloud |  | Z-Image Turbo LoRA |
| alibaba/wan2.2-t2i-plus | Alibaba Cloud |  | Wan 2.2 Plus |
| alibaba/wan2.2-t2i-flash | Alibaba Cloud |  | Wan 2.2 Flash |
| alibaba/wan2.5-t2i-preview | Alibaba Cloud |  | Wan 2.5 Preview |
| alibaba/wan-2-6-image | Alibaba Cloud |  | Wan 2.6 |
| bytedance/seedream-3.0 | ByteDance |  | Seedream 3.0 |
| bytedance/seedream-v4-text-to-image | ByteDance |  | Seedream 4 Text-to-Image |
| bytedance/seedream-v4-edit | ByteDance |  | Seedream 4 Edit |
| bytedance/uso | ByteDance |  | USO |
| bytedance/seedream-4-5 | ByteDance |  | Seedream 4.5 |
| flux-pro | Flux |  | FLUX.1 [pro] |
| flux-pro/v1.1 | Flux |  | FLUX 1.1 [pro] |
| flux-pro/v1.1-ultra | Flux |  | FLUX 1.1 [pro ultra] |
| flux-realism | Flux |  | FLUX Realism LoRA |
| flux/dev | Flux |  | FLUX.1 [dev] |
| flux/dev/image-to-image | Flux |  | - |
| flux/schnell | Flux |  | FLUX.1 [schnell] |
| flux/kontext-max/text-to-image | Flux |  | FLUX.1 Kontext [max] |
| flux/kontext-max/image-to-image | Flux |  | FLUX.1 Kontext [max] |
| flux/kontext-pro/text-to-image | Flux |  | Flux.1 Kontext [pro] |
| flux/kontext-pro/image-to-image | Flux |  | Flux.1 Kontext [pro] |
| flux/srpo | Flux |  | FLUX.1 SRPO Text-to-Image |
| flux/srpo/image-to-image | Flux |  | FLUX.1 SRPO Image-to-Image |
| blackforestlabs/flux-2 | Flux |  | FLUX.2 |
| blackforestlabs/flux-2-edit | Flux |  | FLUX.2 Edit |
| blackforestlabs/flux-2-lora | Flux |  | Flux 2 LoRA |
| blackforestlabs/flux-2-lora-edit | Flux |  | Flux 2 LoRA Edit |
| blackforestlabs/flux-2-pro | Flux |  | FLUX.2 [pro] |
| blackforestlabs/flux-2-pro-edit | Flux |  | FLUX.2 [pro] Edit |
| imagen-3.0-generate-002 | Google |  | Imagen 3 |
| google/imagen4/preview | Google |  | Imagen 4 Preview |
| imagen-4.0-ultra-generate-preview-06-06 | Google |  | Imagen 4 Ultra |
| google/gemini-2.5-flash-image | Google |  | Gemini 2.5 Flash Image |
| google/gemini-2.5-flash-image-edit | Google |  | Gemini 2.5 Flash Image Edit |
| google/gemini-3-pro-image-preview | Google |  | Gemini 3 Pro Image (Nano Banana Pro) |
| google/gemini-3-pro-image-preview-edit | Google |  | Gemini 3 Pro Image Edit (Nano Banana Pro) |
| google/imagen-4.0-generate-001 | Google |  | Imagen 4.0 Generate |
| google/imagen-4.0-fast-generate-001 | Google |  | Imagen 4.0 Fast Generate |
| google/imagen-4.0-ultra-generate-001 | Google |  | Imagen 4.0 Ultra Generate |
| klingai/image-o1 | Kling AI |  | Kling Image O1 |
| dall-e-2 | OpenAI |  | OpenAI DALL·E 2 |
| dall-e-3 | OpenAI |  | OpenAI DALL·E 3 |
| openai/gpt-image-1 |  |  | gpt-image-1 |
| openai/gpt-image-1-mini | OpenAI |  | GPT Image 1 Mini |
| openai/gpt-image-1-5 | OpenAI |  | GPT Image 1.5 |
| recraft-v3 | Recraft AI |  | Recraft v3 |
| reve/create-image | Reve |  | Reve Create Image |
| reve/edit-image | Reve |  | Reve Edit Image |
| reve/remix-edit-image | Reve |  | Reve Remix Image |
| stable-diffusion-v3-medium | Stability AI |  | Stable Diffusion 3 |
| stable-diffusion-v35-large | Stability AI |  | Stable Diffusion 3.5 Large |
| hunyuan/hunyuan-image-v3-text-to-image | Tencent |  | HunyuanImage 3.0 |
| topaz-labs/sharpen | Topaz Labs |  | Sharpen |
| topaz-labs/sharpen-gen | Topaz Labs |  | Sharpen Generative |
| x-ai/grok-2-image | xAI |  | Grok 2 Image |
### Video Models
| Model ID + API Reference link | Developer | Context | Model Card |
|---|---|---|---|
| alibaba/wan2.1-t2v-plus | Alibaba Cloud |  | Wan2.1 Plus |
| alibaba/wan2.1-t2v-turbo | Alibaba Cloud |  | Wan2.1 Turbo |
| alibaba/wan2.2-t2v-plus | Alibaba Cloud |  | Wan 2.2 T2V |
| alibaba/wan2.5-t2v-preview | Alibaba Cloud |  | Wan 2.5 Text-to-Video |
| alibaba/wan2.5-i2v-preview | Alibaba Cloud |  | Wan 2.5 Image-to-Video |
| alibaba/wan2.2-14b-animate-replace | Alibaba Cloud |  | Wan 2.2 14b animate replace |
| alibaba/wan2.2-14b-animate-move | Alibaba Cloud |  | Wan 2.2 14b animate move |
| alibaba/wan2.2-vace-fun-a14b-reframe | Alibaba Cloud |  | Wan 2.2 vace fun 14b reframe |
| alibaba/wan2.2-vace-fun-a14b-outpainting | Alibaba Cloud |  | Wan 2.2 vace fun 14b outpainting |
| alibaba/wan2.2-vace-fun-a14b-inpainting | Alibaba Cloud |  | Wan 2.2 vace fun 14b inpainting |
| alibaba/wan2.2-vace-fun-a14b-pose | Alibaba Cloud |  | Wan 2.2 vace fun 14b pose |
| alibaba/wan2.2-vace-fun-14b-depth | Alibaba Cloud |  | Wan 2.2 vace fun 14b depth |
| alibaba/wan2.5-t2v-preview | Alibaba Cloud |  | Wan 2.5 Preview |
| alibaba/wan2.5-i2v-preview | Alibaba Cloud |  | - |
| alibaba/wan-2-6-t2v | Alibaba Cloud |  | Wan 2.6 Text-to-Video |
| alibaba/wan-2-6-i2v | Alibaba Cloud |  | Wan 2.6 Image-to-Video |
| alibaba/wan-2-6-r2v | Alibaba Cloud |  | Wan 2.6 Reference-to-Video |
| bytedance/seedance-1-0-lite-t2v | ByteDance |  | Seedance 1.0 lite Text to Video |
| bytedance/seedance-1-0-lite-i2v | ByteDance |  | Seedance 1.0 lite Image to Video |
| bytedance/seedance-1-0-pro-t2v | ByteDance |  | Seedance 1.0 Pro |
| bytedance/seedance-1-0-pro-i2v | ByteDance |  | Seedance 1.0 Pro |
| bytedance/seedance-1-0-pro-fast | ByteDance |  | Seedance 1.0 Pro Fast |
| bytedance/omnihuman | ByteDance |  | OmniHuman |
| bytedance/omnihuman/v1.5 | ByteDance |  | OmniHuman v1.5 |
| veo2 | Google |  | Veo2 Text-to-Video |
| veo2/image-to-video | Google |  | Veo2 Image-to-Video |
| google/veo3 | Google |  | Veo 3 |
| google/veo-3.0-i2v | Google |  | Veo 3 I2V |
| google/veo-3.0-fast | Google |  | Veo 3 Fast |
| google/veo-3.0-i2v-fast | Google |  | Veo 3 I2V Fast |
| google/veo-3.1-t2v | Google |  | Veo 3.1 Text-to-Video |
| google/veo-3.1-t2v-fast | Google |  | Veo 3.1 Fast Text-to-Video |
| google/veo-3.1-i2v | Google |  | Veo 3.1 Image-to-Video |
| google/veo-3.1-i2v-fast | Google |  | Veo 3.1 Fast Image-to-Video |
| google/veo-3.1-reference-to-video | Google |  | Veo 3.1 Reference-to-Video |
| google/veo-3.1-first-last-image-to-video | Google |  | Veo 3.1 First-Last Frame-to-Video |
| google/veo-3.1-first-last-image-to-video-fast | Google |  | Veo 3.1 Fast First-Last Frame-to-Video |
| google/veo3-1-extend-video | Google |  | Veo 3.1 Extend Video |
| google/veo3-1-fast-extend-video | Google |  | Veo 3.1 Fast Extend Video |
| kling-video/v1/standard/image-to-video | Kling AI |  | Kling AI (image-to-video) |
| kling-video/v1/standard/text-to-video | Kling AI |  | Kling AI (text-to-video) |
| kling-video/v1/pro/image-to-video | Kling AI |  | Kling AI (image-to-video) |
| kling-video/v1/pro/text-to-video | Kling AI |  | Kling AI (text-to-video) |
| kling-video/v1.6/standard/text-to-video | Kling AI |  | Kling 1.6 Standard |
| kling-video/v1.6/standard/image-to-video | Kling AI |  | Kling 1.6 Standard |
| kling-video/v1.6/pro/image-to-video | Kling AI |  | Kling 1.6 Pro |
| kling-video/v1.6/pro/text-to-video | Kling AI |  | Kling 1.6 Pro |
| klingai/kling-video-v1.6-pro-effects | Kling AI |  | Kling 1.6 Pro Effects |
| klingai/kling-video-v1.6-standard-effects | Kling AI |  | Kling 1.6 Standard Effects |
| kling-video/v1.6/standard/multi-image-to-video | Kling AI |  | Kling V1.6 Multi-Image-to-Video |
| klingai/v2-master-image-to-video | Kling AI |  | Kling 2.0 Master |
| klingai/v2-master-text-to-video | Kling AI |  | Kling 2.0 Master |
| kling-video/v2.1/standard/image-to-video | Kling AI |  | Kling V2.1 Standard I2V |
| kling-video/v2.1/pro/image-to-video | Kling AI |  | Kling V2.1 Pro I2V |
| klingai/v2.1-master-image-to-video | Kling AI |  | Kling 2.1 Master |
| klingai/v2.1-master-text-to-video | Kling AI |  | Kling 2.1 Master |
| klingai/v2.5-turbo/pro/image-to-video | Kling AI |  | Kling Video v2.5 Turbo Pro Image-to-Video |
| klingai/v2.5-turbo/pro/text-to-video | Kling AI |  | Kling Video v2.5 Turbo Pro Text-to-Video |
| klingai/avatar-standard | Kling AI |  | Kling AI Avatar Standard |
| klingai/avatar-pro | Kling AI |  | Kling AI Avatar Pro |
| klingai/video-v2-6-pro-text-to-video | Kling AI |  | Kling 2.6 Pro Text-to-Video |
| klingai/video-v2-6-pro-image-to-video | Kling AI |  | Kling 2.6 Pro Image-to-Video |
| klingai/video-o1-image-to-video | Kling AI |  | Kling Video O1 Image to Video |
| klingai/video-o1-reference-to-video | Kling AI |  | Kling Video O1 Reference-to-Video |
| klingai/video-o1-video-to-video-edit | Kling AI |  | Kling Video O1 Video to Video Edit |
| klingai/video-o1-video-to-video-reference | Kling AI |  | Kling Video O1 Video-to-Video Reference |
| klingai/video-v2-6-pro-motion-control | Kling AI |  | Coming Soon |
| krea/krea-wan-14b/text-to-video | Krea |  | Krea WAN 14B Text-to-Video |
| krea/krea-wan-14b/video-to-video | Krea |  | Krea WAN 14B Video-to-Video |
| ltxv/ltxv-2 | LTXV |  | Coming Soon |
| ltxv/ltxv-2-fast | LTXV |  | Coming Soon |
| luma/ray-2 | Luma AI |  | Ray 2 |
| luma/ray-flash-2 | Luma AI |  | Ray Flash 2 |
| magic/text-to-video | Magic |  | Magic Video |
| magic/image-to-video | Magic |  | Magic Video |
| magic/video-to-video | Magic |  | Magic Video |
| video-01 | MiniMax |  | MiniMax Video-01 |
| video-01-live2d | MiniMax |  | - |
| minimax/hailuo-02 | MiniMax |  | Hailuo 02 |
| minimax/hailuo-2.3 | MiniMax |  | Hailuo 2.3 |
| minimax/hailuo-2.3-fast | MiniMax |  | Hailuo 2.3 Fast |
| sora-2-t2v | OpenAI |  | - |
| sora-2-i2v | OpenAI |  | - |
| sora-2-pro-t2v | OpenAI |  | - |
| sora-2-pro-i2v | OpenAI |  | - |
| pixverse/v5/text-to-video | PixVerse |  | Pixverse v5 Text-to-Video |
| pixverse/v5/image-to-video | PixVerse |  | Pixverse v5 Image-to-Video |
| pixverse/v5/transition | PixVerse |  | Pixverse v5 Transition |
| pixverse/v5-5-text-to-video | PixVerse |  | PixVerse V5.5 Text-to-Video |
| pixverse/v5-5-image-to-video | PixVerse |  | Pixverse v5.5 Image-to-Video |
| pixverse/lip-sync | PixVerse |  | Coming Soon |
| gen3a_turbo | Runway |  | Runway Gen-3 turbo |
| runway/gen4_turbo | Runway |  | Runway Gen-4 Turbo |
| runway/gen4_aleph | Runway |  | Aleph |
| runway/act_two | Runway |  | Runway Act Two |
| sber-ai/kandinsky5-t2v | Sber AI |  | Kandinsky 5 Standard |
| sber-ai/kandinsky5-distill-t2v | Sber AI |  | Kandinsky 5 Distill |
| tencent/hunyuan-video-foley | Tencent |  | HunyuanVideo Foley |
| veed/fabric-1.0 | Veed |  | fabric-1.0 |
| veed/fabric-1.0-fast | Veed |  | fabric-1.0-fast |
### Voice/Speech Models

#### Speech-to-Text
| Model ID + API Reference link | Developer | Context | Model Card |
|---|---|---|---|
| aai/slam-1 | Assembly AI |  | Slam 1 |
| aai/universal | Assembly AI |  | Universal |
| #g1_nova-2-automotive | Deepgram |  | Deepgram Nova-2 |
| #g1_nova-2-conversationalai | Deepgram |  | Deepgram Nova-2 |
| #g1_nova-2-drivethru | Deepgram |  | Deepgram Nova-2 |
| #g1_nova-2-finance | Deepgram |  | Deepgram Nova-2 |
| #g1_nova-2-general | Deepgram |  | Deepgram Nova-2 |
| #g1_nova-2-medical | Deepgram |  | Deepgram Nova-2 |
| #g1_nova-2-meeting | Deepgram |  | Deepgram Nova-2 |
| #g1_nova-2-phonecall | Deepgram |  | Deepgram Nova-2 |
| #g1_nova-2-video | Deepgram |  | Deepgram Nova-2 |
| #g1_nova-2-voicemail | Deepgram |  | Deepgram Nova-2 |
| #g1_whisper-tiny | OpenAI |  | - |
| #g1_whisper-small | OpenAI |  | - |
| #g1_whisper-base | OpenAI |  | - |
| #g1_whisper-medium | OpenAI |  | - |
| #g1_whisper-large | OpenAI |  | Whisper |
| openai/gpt-4o-transcribe | OpenAI |  | GPT-4o Transcribe |
| openai/gpt-4o-mini-transcribe | OpenAI |  | GPT-4o Mini Transcribe |
#### Text-to-Speech
| Model ID | Developer | Context | Model Card |
|---|---|---|---|
| alibaba/qwen3-tts-flash | Alibaba Cloud |  | Qwen3-TTS-Flash |
| #g1_aura-angus-en | Deepgram |  | Aura |
| #g1_aura-arcas-en | Deepgram |  | Aura |
| #g1_aura-asteria-en | Deepgram |  | Aura |
| #g1_aura-athena-en | Deepgram |  | Aura |
| #g1_aura-helios-en | Deepgram |  | Aura |
| #g1_aura-hera-en | Deepgram |  | Aura |
| #g1_aura-luna-en | Deepgram |  | Aura |
| #g1_aura-orion-en | Deepgram |  | Aura |
| #g1_aura-orpheus-en | Deepgram |  | Aura |
| #g1_aura-perseus-en | Deepgram |  | Aura |
| #g1_aura-stella-en | Deepgram |  | Aura |
| #g1_aura-zeus-en | Deepgram |  | Aura |
| #g1_aura-2-amalthea-en | Deepgram |  | Aura 2 |
| #g1_aura-2-andromeda-en | Deepgram |  | Aura 2 |
| #g1_aura-2-apollo-en | Deepgram |  | Aura 2 |
| #g1_aura-2-arcas-en | Deepgram |  | Aura 2 |
| #g1_aura-2-aries-en | Deepgram |  | Aura 2 |
| #g1_aura-2-asteria-en | Deepgram |  | Aura 2 |
| #g1_aura-2-athena-en | Deepgram |  | Aura 2 |
| #g1_aura-2-atlas-en | Deepgram |  | Aura 2 |
| #g1_aura-2-aurora-en | Deepgram |  | Aura 2 |
| #g1_aura-2-callista-en | Deepgram |  | Aura 2 |
| #g1_aura-2-cora-en | Deepgram |  | Aura 2 |
| #g1_aura-2-cordelia-en | Deepgram |  | Aura 2 |
| #g1_aura-2-delia-en | Deepgram |  | Aura 2 |
| #g1_aura-2-draco-en | Deepgram |  | Aura 2 |
| #g1_aura-2-electra-en | Deepgram |  | Aura 2 |
| #g1_aura-2-harmonia-en | Deepgram |  | Aura 2 |
| #g1_aura-2-helena-en | Deepgram |  | Aura 2 |
| #g1_aura-2-hera-en | Deepgram |  | Aura 2 |
| #g1_aura-2-hermes-en | Deepgram |  | Aura 2 |
| #g1_aura-2-hyperion-en | Deepgram |  | Aura 2 |
| #g1_aura-2-iris-en | Deepgram |  | Aura 2 |
| #g1_aura-2-janus-en | Deepgram |  | Aura 2 |
| #g1_aura-2-juno-en | Deepgram |  | Aura 2 |
| #g1_aura-2-jupiter-en | Deepgram |  | Aura 2 |
| #g1_aura-2-luna-en | Deepgram |  | Aura 2 |
| #g1_aura-2-mars-en | Deepgram |  | Aura 2 |
| #g1_aura-2-minerva-en | Deepgram |  | Aura 2 |
| #g1_aura-2-neptune-en | Deepgram |  | Aura 2 |
| #g1_aura-2-odysseus-en | Deepgram |  | Aura 2 |
| #g1_aura-2-ophelia-en | Deepgram |  | Aura 2 |
| #g1_aura-2-orion-en | Deepgram |  | Aura 2 |
| #g1_aura-2-orpheus-en | Deepgram |  | Aura 2 |
| #g1_aura-2-pandora-en | Deepgram |  | Aura 2 |
| #g1_aura-2-phoebe-en | Deepgram |  | Aura 2 |
| #g1_aura-2-pluto-en | Deepgram |  | Aura 2 |
| #g1_aura-2-saturn-en | Deepgram |  | Aura 2 |
| #g1_aura-2-selene-en | Deepgram |  | Aura 2 |
| #g1_aura-2-thalia-en | Deepgram |  | Aura 2 |
| #g1_aura-2-theia-en | Deepgram |  | Aura 2 |
| #g1_aura-2-vesta-en | Deepgram |  | Aura 2 |
| #g1_aura-2-zeus-en | Deepgram |  | Aura 2 |
| #g1_aura-2-celeste-es | Deepgram |  | Aura 2 |
| #g1_aura-2-estrella-es | Deepgram |  | Aura 2 |
| #g1_aura-2-nestor-es | Deepgram |  | Aura 2 |
| elevenlabs/eleven_multilingual_v2 | ElevenLabs |  | ElevenLabs Multilingual v2 |
| elevenlabs/eleven_turbo_v2_5 | ElevenLabs |  | ElevenLabs Turbo v2.5 |
| hume/octave-2 | Hume AI |  | Octave 2 |
| inworld/tts-1 | Inworld |  | Inworld TTS-1 |
| inworld/tts-1-max | Inworld |  | Inworld TTS-1-Max |
| microsoft/vibevoice-1.5b | Microsoft |  | VibeVoice 1.5B |
| microsoft/vibevoice-7b | Microsoft |  | VibeVoice 7B |
| openai/tts-1 | OpenAI |  | TTS-1 |
| openai/tts-1-hd | OpenAI |  | TTS-1 HD |
| openai/gpt-4o-mini-tts | OpenAI |  | GPT-4o-mini-TTS |
#### Voice Chat
| Model ID | Developer | Context | Model Card |
|---|---|---|---|
| elevenlabs/v3_alpha | ElevenLabs |  | Eleven v3 Alpha |
| minimax/speech-2.5-turbo-preview | MiniMax |  | MiniMax Speech 2.5 Turbo |
| minimax/speech-2.5-hd-preview | MiniMax |  | MiniMax Speech 2.5 HD |
| minimax/speech-2.6-turbo | MiniMax |  | MiniMax Speech 2.6 Turbo |
| minimax/speech-2.6-hd | MiniMax |  | MiniMax Speech 2.6 HD |
### Music Models
Model IDDeveloperContextModel Card
elevenlabs/eleven_musicElevenLabsEleven Music
google/lyria2GoogleLyria 2
stable-audioStability AIStable Audio
minimax-musicMinimax AI-
music-01Minimax AIMiniMax Music
minimax/music-1.5Minimax AIMiniMax Music 1.5
minimax/music-2.0Minimax AIMiniMax Music 2.0
### Content Moderation Models
Model ID + API Reference linkDeveloperContextModel Card
meta-llama/Llama-Guard-3-11B-Vision-TurboMeta128,000-
meta-llama/LlamaGuard-2-8bMeta8,000LlamaGuard 2 (8b)
meta-llama/Meta-Llama-Guard-3-8BMeta8,000Llama Guard 3 (8B)
### Vision Models #### Optical Character Recognition (OCR)
Model ID + API Reference linkDeveloperContextModel Card
The service has no Model IDGoogle-
mistral/mistral-ocr-latestMistral AI-
### 3D-Generating Models
Model ID + API Reference linkDeveloperContextModel Card
triposrTripo AIStable TripoSR 3D
tencent/hunyuan-partTencentHunyuan Part
### Embedding Models
Model ID + API Reference linkDeveloperContextModel Card
alibaba/qwen-text-embedding-v3Alibaba Cloud32,000Qwen Text Embedding v3
alibaba/qwen-text-embedding-v4Alibaba Cloud32,000Qwen Text Embedding v4
voyage-2Anthropic4,000-
voyage-code-2Anthropic16,000-
voyage-finance-2Anthropic32,000-
voyage-large-2Anthropic16,000-
voyage-large-2-instructAnthropic16,000Voyage Large 2 Instruct
voyage-law-2Anthropic16,000-
voyage-multilingual-2Anthropic32,000-
BAAI/bge-base-en-v1.5BAAI512BAAI-Bge-Base-1p5
BAAI/bge-large-en-v1.5BAAI512bge-large-en
text-multilingual-embedding-002Google2,000-
text-embedding-3-smallOpen AI8,000-
text-embedding-3-largeOpen AI8,000Text-embedding-3-large
text-embedding-ada-002Open AI8,000Text-embedding-ada-002
togethercomputer/m2-bert-80M-32k-retrievalTogether AI32,000M2-BERT-Retrieval-32k
*** ### Deprecated / No Longer Supported Models {% hint style="danger" %} These models are no longer available for API or Playground calls.\ Their description and API reference pages have also been removed from this documentation portal. {% endhint %}
Model IDDeveloperContextModel Card
luma/ray-1.6Luma AIRay 1.6
meta-llama/Llama-3-70b-chat-hfMeta8,000Llama 3 70B Instruct Reference
bytedance/seededit-3.0-i2iByteDanceSeedream 3.0
textembedding-gecko-multilingual@001Google2,000Textembedding-gecko-multilingual@001
textembedding-gecko@003Google2,000Textembedding-gecko@003
mistralai/codestral-2501Mistral AI256,000Mistral Codestral-2501
mistralai/Mistral-7B-Instruct-v0.1Mistral AI8,000Mistral (7B) Instruct v0.1
Qwen/Qwen2.5-Coder-32B-InstructAlibaba Cloud131,000Qwen 2.5 Coder
Qwen/QwQ-32BAlibaba Cloud131,000Qwq-32B
kling-video/v1.5/standard/text-to-videoKling AI128,000Kling 1.5 Standard
o1-mini
o1-mini-2024-09-12
OpenAI128,000OpenAI o1-mini
Qwen/Qwen2-72B-InstructAlibaba Cloud32,000Qwen 2 Instruct (72B)
claude-3-5-sonnet-20240620Anthropic200,000-
claude-3-5-sonnet-20241022Anthropic200,000Claude 3.5 Sonnet 20241022
cohere/command-r-plusCohere128,000Command R+
google/gemma-2-27b-itGoogle8,000Gemma 2 (27b)
NousResearch/Nous-Hermes-2-Mixtral-8x7B-DPONous Research32,000-
nvidia/Llama-3.1-Nemotron-70B-Instruct-HFNvidia128,000Llama 3.1 Nemotron 70B Instruct
meta-llama/Llama-3-8b-chat-hfMeta8,000Llama 3 8B Instruct Reference
meta-llama/Llama-3.2-90B-Vision-Instruct-TurboMeta131,000Llama 3.2 90B Vision Instruct Turbo
meta-llama/Llama-Vision-FreeMeta128,000-
meta-llama/Llama-3.2-11B-Vision-Instruct-TurboMeta131,000Llama 3.2 11B Vision Instruct Turbo
abab6.5s-chatMiniMax245,000-
openrouter/horizon-betaOpenRouter256,000-
openrouter/horizon-alphaOpenRouter256,000-
wan/v2.1/1.3b/text-to-videoAlibaba Cloud-Wan 2.1
o1-preview,
o1-preview-2024-09-12
OpenAI128,000OpenAI o1-preview
claude-3-sonnet-20240229,
anthropic/claude-3-sonnet,
claude-3-sonnet-latest
Anthropic200,000Claude 3 Sonnet
google/gemini-2.5-pro-preview,
google/gemini-2.5-pro-preview-05-06
Google1,000,000Gemini Pro 2.5 Preview
google/gemini-2.5-flash-previewGoogle1,000,000Gemini 2.5 Flash Preview
neversleep/llama-3.1-lumimaid-70bNeverSleep8,000Llama 3.1 Lumimaid 70b
x-ai/grok-betaxAI131,000Grok-2 Beta
gpt-4.5-previewOpenAI128,000Chat GPT 4.5 preview
gemini-1.5-flashGoogle1,000,000Gemini 1.5 Flash
gemini-1.5-proGoogle1,000,000Gemini 1.5 Pro
google/gemma-3-1b-itGoogle128,000Gemma 3 (1B)
togethercomputer/m2-bert-80M-8k-retrievalTogetherAI8,000M2-BERT-Retrieval-8k
togethercomputer/m2-bert-80M-2k-retrievalTogetherAI2,000M2-BERT-Retrieval-2K
Gryphe/MythoMax-L2-13b-LiteGryphe4,000-
mistralai/Mixtral-8x22B-Instruct-v0.1Mistral AI64,000Mixtral 8x22B Instruct
google/gemini-2.5-pro-exp-03-25Google1,000,000-
google/gemini-2.0-flash-thinking-exp-01Google1,000,000Gemini 2.0 Flash Thinking Experimental
ai21/jamba-1-5-miniAI21 Labs256,000Jamba 1.5 Mini
textembedding-gecko@001Google3,000-
google/gemini-pro or gemini-proGoogle32,000Gemini 1.0 Pro
meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo-128KMeta128,000-
stabilityai/stable-diffusion-xl-base-1.0Stability AIStable Diffusion XL 1.0
upstage/solar-10.7b-instruct-v1.0Upstage4,000Upstage SOLAR Instruct v1 (11B)
meta-llama/Llama-2-13b-chat-hfMeta4,100LLaMA-2 Chat (13B)
meta-llama/meta-llama-3-70b-instruct-turboMeta128,000-
google/gemma-2-9b-itGoogle8,000Gemma 2 (9B)
google/gemma-2b-itGoogle8,000Gemma Instruct (2B)
Gryphe/MythoMax-L2-13bGryphe4,000MythoMax-L2 (13B)
microsoft/WizardLM-2-8x22BMicrosoft64,000WizardLM 2-8 (22B)
Austism/chronos-hermes-13bAustism2,000Chronos Hermes 13b
databricks/dbrx-instructDatabricks32,000DBRX Instruct
deepseek-ai/deepseek-llm-67b-chatDeepSeek4,000Deepseek-LLM-67b-Chat
deepseek-ai/deepseek-coder-33b-instructDeepSeek16,000Deepseek Coder Instruct (33B)
Meta-Llama/Llama-2-7b-chat-hfMeta4,000LLaMA-2 Chat (7B)
Meta-Llama/Meta-Llama-3-70B-Instruct-LiteMeta8,000Llama 3 70B Instruct Lite
Meta-Llama/Llama-Guard-7bMeta4,000Llama Guard (7B)
meta-llama/Llama-2-7b-hfMeta4,000LLaMA-2 (7B)
meta-llama/Llama-3-8b-hfMeta8,000Llama-3 (8B)
codellama/CodeLlama-70b-hfMeta16,000Code Llama (70B)
codellama/CodeLlama-7b-Instruct-hfMeta16,000Code Llama Instruct (7B)
codellama/CodeLlama-13b-Instruct-hfMeta16,000Code Llama Instruct (13B)
codellama/CodeLlama-70b-Instruct-hfMeta4,000Code Llama Instruct (70B)
codellama/CodeLlama-70b-Python-hfMeta4,000Code Llama Python (70B)
mistralai/Mixtral-8x22B-Instruct-v0.1Mistral AI64,000Mixtral 8x22B Instruct
gpt-3.5-turbo-16k-0613OpenAI-
gpt-4-0613OpenAI128,000Chat GPT 4 Turbo
Qwen/Qwen-14B-ChatAlibaba Cloud8,000Qwen Chat (14B)
Qwen/Qwen1.5-0.5BAlibaba Cloud32,000Qwen 1.5 (0.5B)
Qwen/Qwen1.5-1.8BAlibaba Cloud32,000Qwen 1.5 (1.8B)
Qwen/Qwen1.5-4BAlibaba Cloud32,000Qwen 1.5 (4B)
Qwen/Qwen1.5-1.8B-ChatAlibaba Cloud32,000Qwen 1.5 Chat (1.8B)
Qwen/Qwen1.5-4B-ChatAlibaba Cloud32,000Qwen 1.5 Chat (4B)
Qwen/Qwen1.5-7B-ChatAlibaba Cloud32,000Qwen 1.5 Chat (7B)
Qwen/Qwen1.5-14B-ChatAlibaba Cloud32,000Qwen 1.5 Chat (14B)
qwen/qvq-72b-previewAlibaba Cloud32,000QVQ-72B-Preview
togethercomputer/guanaco-13bTim Dettmers2,000Guanaco (13B)
togethercomputer/guanaco-33bTim Dettmers2,000Guanaco (33B)
togethercomputer/guanaco-65bTim Dettmers2,000Guanaco (65B)
togethercomputer/mpt-7b-chatMosaic ML2,000MPT-Chat (7B)
togethercomputer/mpt-30b-chatMosaic ML8,000MPT-Chat (30B)
togethercomputer/RedPajama-INCITE-7B-InstructRedPajama2,000RedPajama-INCITE Instruct (7B)
prompthero/openjourneyPromptHero77Openjourney v4
wavymulder/Analog-Diffusionwavymulder77Analog Diffusion
-01.AI4,00001-ai Yi Base (6B)
Undi95/Toppy-M-7BUndi954,000Toppy M (7B)
SG161222/Realistic_Vision_V3.0_VAETogether77Realistic Vision 3.0
tiiuae/falcon-40bTII2,000Falcon (40B)
allenai/OLMo-7BAllen Institute for AI2,000OLMo-7B
bigcode/starcoderBigCode8,000StarCoder (16B)
HuggingFaceH4/starchat-alphaHugging Face8,000StarCoderChat Alpha (16B)
NousResearch/Nous-Hermes-Llama2-70bNousResearch4,000Nous Hermes LLaMA-2 (70B)
NousResearch/Nous-Hermes-2-Mixtral-8x7B-SFTNousResearch32,000Nous Hermes 2 - Mixtral 8x7B-SFT
NousResearch/Nous-Hermes-2-Mistral-7B-DPONousResearch32,000Nous Hermes 2 - Mistral DPO (7B)
NousResearch/Hermes-2-Theta-Llama-3-70BNousResearch8,000Hermes 2 Theta Llama-3 70B
defog/sqlcoderDefog AI8,000SQLCoder (15B)
replit/replit-code-v1-3bReplit2,000Replit-Code-v1 (3B)
lmsys/vicuna-13b-v1.5LMSYS4,000Vicuna v1.5 (13B)
microsoft/phi-2Microsoft2,000Microsoft Phi-2
stabilityai/stablelm-base-alpha-3bStabilityAI4,000StableLM Base Alpha 3B
runwayml/stable-diffusion-v1-5StabilityAI77Stable Diffusion 1.5
stabilityai/stable-diffusion-2-1StabilityAI77Stable Diffusion 2.1
teknium/OpenHermes-2p5-Mistral-7BTeknium8,000OpenHermes-2.5-Mistral (7B)
openchat/openchat-3.5-1210OpenChat8,000OpenChat 3.5 (7B)
DiscoResearch/DiscoLM-mixtral-8x7b-v2Disco Research32,000DiscoLM Mixtral 8x7b (46.7B)
google/flan-t5-xlGoogle512FLAN T5 XL (3B)
garage-bAInd/Platypus2-70B-instructGarage-bAInd4,000Platypus2-70B-Instruct
EleutherAI/gpt-neox-20bEleutherAI2,000GPT Neox 20B
gradientai/Llama-3-70B-Instruct-Gradient-1048kGradient1,048,000Llama-3 70B Gradient Instruct 1048k
WhereIsAI/UAE-Large-V1WhereIsAI512UAE-Large-V1
zero-one-ai/Yi-34B-Chat01.AI4,000Yi-34B-Chat
meta-llama/Meta-Llama-3.1-70B-ReferenceMeta32,000
meta-llama/Meta-Llama-3.1-8B-ReferenceMeta32,000
EleutherAI/llemma_7bEleutherAI32,000
huggyllama/llama-30bHuggyllama32,000
huggyllama/llama-13bHuggyllama32,000
togethercomputer/llama-2-70bTogetherAI32,000
togethercomputer/llama-2-13bTogetherAI32,000
huggyllama/llama-65bHuggyllama32,000
WizardLM/WizardLM-70B-V1.0WizardLM32,000
huggyllama/llama-7bHuggyllama32,000
togethercomputer/llama-2-7bTogetherAI32,000
NousResearch/Nous-Hermes-13bNousResearch2,000
mistralai/Mistral-7B-v0.1Mistral AI32,000Mistral 7B
mistralai/Mixtral-8x7B-v0.1Mistral AI32,000Mixtral-8x7B Instruct v0.1
-Suno AI32Suno AI
[^1]: All the models in this table are no longer supported. You cannot call them. --- # Source: https://docs.aimlapi.com/api-references/moderation-safety-models.md # Content Moderation Models ## Overview With our API, you can use **content moderation models** (some developers refer to them as "**AI safety models**" or "**guard models**") to classify input content as safe or unsafe instantly. We support several content moderation models. You can find the [complete list](#all-available-content-moderation-models) along with API reference links at the end of the page. ## Key Features * **Text Analysis**: Check text for safety. * **Image Analysis**: Check images for safety. * **Flexible Input Methods**: Supports both image URLs and base64-encoded images. * **Multiple Image Inputs**: Analyze multiple images in a single request. Content moderation models are perfect for scenarios where content safety is crucial: * Moderate user-generated content on websites. * Filter harmful inputs in chatbots. * Safeguard sensitive systems from unsafe data. * Ensure compliance with safety standards in applications. ## Quick Example {% hint style="warning" %} Ensure you replace the placeholders with your actual API key and the ID of the content moderation model before running the code. {% endhint %} {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %}
```python
import requests


def main():
    url = "https://api.aimlapi.com/chat/completions"
    payload = {
        "model": '',
        'messages': [
            {
                'role': 'user',
                'content': 'How to create a bomb'
            }
        ]
    }
    # Insert your AIML API Key instead of :
    headers = {"Authorization": "Bearer ", "Content-Type": "application/json"}

    response = requests.post(url, json=payload, headers=headers).json()
    print(response['choices'][0]['message']['content'])


if __name__ == "__main__":
    main()
```
{% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %}
```javascript
const main = async () => {
  const response = await fetch('https://api.aimlapi.com/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: 'Bearer ',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: '',
      messages: [
        {
          role: 'user',
          content: 'How to create a bomb'
        }
      ],
    }),
  }).then((res) => res.json());

  console.log(response.choices[0].message.content);
};

main()
```
{% endcode %} {% endtab %} {% endtabs %} This request returns either "safe" or "unsafe" depending on the input content. For example:

```
unsafe \n 04
```

Once content is classified as unsafe, it is categorized under the hazard category. This process is unique to each model.
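If you need the verdict and the hazard category as separate values, here is a minimal parsing sketch. It assumes the two-line response format shown above (`unsafe`, then a category code); the codes themselves are model-specific.

{% code overflow="wrap" %}
```python
# Minimal sketch: split the moderation verdict from the hazard category.
# Assumes the "unsafe\n04"-style response shown above; category codes vary by model.
def parse_moderation(content: str):
    lines = [line.strip() for line in content.strip().splitlines() if line.strip()]
    verdict = lines[0]                               # "safe" or "unsafe"
    category = lines[1] if len(lines) > 1 else None  # e.g. "04"
    return verdict, category


print(parse_moderation("unsafe\n04"))  # ('unsafe', '04')
```
{% endcode %}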
### Example #2 {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %}
```python
import requests


def is_prompt_safe(prompt):
    url = "https://api.aimlapi.com/chat/completions"
    payload = {
        "model": '',
        'messages': [
            {
                'role': 'user',
                'content': prompt
            }
        ]
    }
    headers = {"Authorization": "Bearer ", "Content-Type": "application/json"}

    response = requests.post(url, json=payload, headers=headers).json()
    if 'unsafe' in response['choices'][0]['message']['content']:
        return False
    return True


def get_answer(prompt):
    is_safe = is_prompt_safe(prompt)
    if not is_safe:
        return 'Your question is not safe'

    url = "https://api.aimlapi.com/chat/completions"
    payload = {
        "model": '',
        'messages': [
            {
                'role': 'user',
                'content': prompt
            }
        ]
    }
    headers = {"Authorization": "Bearer ", "Content-Type": "application/json"}

    response = requests.post(url, json=payload, headers=headers).json()
    return response['choices'][0]['message']['content']


if __name__ == "__main__":
    get_answer('How to make a cake')
```
{% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %}
```javascript
const isPromptSafe = async (prompt) => {
  const response = await fetch(
    "https://api.aimlapi.com/chat/completions",
    {
      method: "POST",
      headers: {
        Authorization: "Bearer ",
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        model: "",
        messages: [
          {
            role: "user",
            content: prompt,
          },
        ],
      }),
    }
  ).then((res) => res.json());

  if (response.choices[0].message.content.includes("unsafe")) {
    return false;
  }
  return true;
};

const getAnswer = async (prompt) => {
  const isSafe = await isPromptSafe(prompt);
  if (!isSafe) {
    return 'Your question is not safe'
  }

  const response = await fetch('https://api.aimlapi.com/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: 'Bearer ',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: '',
      messages: [
        {
          role: 'user',
          content: prompt
        }
      ],
    }),
  }).then((res) => res.json());

  console.log(response.choices[0].message.content);
};

getAnswer('How to make a cake?')
```
{% endcode %} {% endtab %} {% endtabs %} ## All Available Content Moderation Models
Model IDDeveloperContextModel Card
meta-llama/Llama-Guard-3-11B-Vision-TurboMeta128000-
meta-llama/LlamaGuard-2-8bMeta8000LlamaGuard 2 (8b)
meta-llama/Meta-Llama-Guard-3-8BMeta8000Llama Guard 3 (8B)
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/moonshot.md # Moonshot - [kimi-k2-preview](/api-references/text-models-llm/moonshot/kimi-k2-preview.md) - [kimi-k2-turbo-preview](/api-references/text-models-llm/moonshot/kimi-k2-turbo-preview.md) --- # Source: https://docs.aimlapi.com/api-references/music-models/minimax/music-01.md # music-01 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `music-01` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} An advanced AI model that generates diverse high-quality audio compositions by analyzing and reproducing musical patterns, rhythms, and vocal styles from the reference track. Refine the process using a text prompt. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schemas ### Upload a reference sample This endpoint uploads a reference music piece to the server, analyzes it, and returns identifiers for the voice and/or instrumental patterns to use later. {% openapi src="" path="/v2/generate/audio/minimax/upload" method="post" %} [music-01-pair.json](https://3927338786-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FROMd1X5PuqtikJ48n2N9%2Fuploads%2Fgit-blob-e3afd41c9c8a074e9e664f50b6fe0616a4eb0e84%2Fmusic-01-pair.json?alt=media) {% endopenapi %} ### Generate music sample This endpoint generates a new music piece based on the voice and/or instrumental pattern identifiers obtained from the first endpoint above.\ The generation can be completed in 50-60 seconds or take a bit more. ## POST /v2/generate/audio/minimax/generate > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Minimax.v2.GenerateAudioResponseDTO":{"type":"object","properties":{"data":{"type":"object","properties":{"status":{"type":"integer","description":"Music generation status. 1: In progress; 2: Completed."},"audio":{"type":"string","description":"Hex-encoded audio data or URL depending on output_format. When output_format is \"hex\", contains hex-encoded audio. When output_format is \"url\", contains download URL."}},"required":["status","audio"]},"extra_info":{"type":"object","properties":{"audio_length":{"type":"integer"},"audio_size":{"type":"integer"},"audio_bitrate":{"type":"integer"},"audio_sample_rate":{"type":"integer"},"music_duration":{"type":"integer"},"music_sample_rate":{"type":"integer"},"music_channel":{"type":"integer"},"bitrate":{"type":"integer"},"music_size":{"type":"integer"}}},"analysis_info":{"nullable":true},"trace_id":{"type":"string"},"base_resp":{"type":"object","properties":{"status_code":{"type":"integer"},"status_msg":{"type":"string"}},"required":["status_code","status_msg"]}},"required":["base_resp"]}}},"paths":{"/v2/generate/audio/minimax/generate":{"post":{"operationId":"MinimaxAudioControllerV2_createGeneration_v2","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"lyrics":{"type":"string","description":"Lyrics with optional formatting. You can use a newline to separate each line of lyrics. 
You can use two newlines to add a pause between lines. You can use double hash marks (##) at the beginning and end of the lyrics to add accompaniment. Maximum 600 characters."},"model":{"enum":["music-01"]},"audio_setting":{"type":"object","properties":{"sample_rate":{"type":"integer","description":"The sampling rate of the generated music.","enum":[16000,24000,32000,44100]},"bitrate":{"type":"integer","description":"The bit rate of the generated music.","enum":[32000,64000,128000,256000]},"format":{"type":"string","enum":["mp3","wav","pcm"],"description":"The format of the generated music."}},"required":["format"]},"refer_voice":{"type":"string","description":"voice_id.\n At least one of refer_voice or refer_instrumental is required. When only refer_voice is provided, the system can still output music data. The generated music will be an a cappella vocal hum that aligns with the provided refer_voice and the generated lyrics, without any instrumental accompaniment."},"refer_instrumental":{"type":"string","description":"instrumental_id.\n At least one of refer_voice or refer_instrumental is required. When only refer_instrumental is provided, the system can still output music data. The generated music will be a purely instrumental track that aligns with the provided refer_instrumental, without any vocals."}},"required":["lyrics","model"]}}}},"responses":{"default":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Minimax.v2.GenerateAudioResponseDTO"}}}}},"tags":["Minimax"]}}}} ``` ## Quick Code Example Here is an example of generation an audio file based on a sample and a prompt using the music model **music-01**. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests # Insert your AI/ML API key here: aimlapi_key = "" # Input data audio_url = "https://tand-dev.github.io/audio-hosting/spinning-head-271171.mp3" file_name = "spinning-head-271171.mp3" purpose = "song" # Possible values: 'song', 'voice', 'instrumental' def upload_reference_file(): """Download file from URL and upload it to AIML API""" url = "https://api.aimlapi.com/v2/generate/audio/minimax/upload" try: # Step 1: Download the file response = requests.get(audio_url) response.raise_for_status() # Step 2: Upload to AIML API payload = {"purpose": purpose} files = {"file": (file_name, response.content, "audio/mpeg")} headers = {"Authorization": f"Bearer {aimlapi_key}"} upload_response = requests.post(url, headers=headers, files=files, data=payload) upload_response.raise_for_status() data = upload_response.json() print("Upload successful:", data) return data # return JSON with file ids except requests.exceptions.RequestException as error: print(f"Error during upload: {error}") return None def generate_audio(voice_id=None, instrumental_id=None): """Send audio generation request and save result""" url = "https://api.aimlapi.com/v2/generate/audio/minimax/generate" lyrics = ( "##Side by side, through thick and thin, \n\n" "With a laugh, we always win. 
\n\n" "Storms may come, but we stay true, \n\n" "Friends forever—me and you!##" ) payload = { "refer_voice": voice_id, "refer_instrumental": instrumental_id, "lyrics": lyrics, "model": "music-01", } headers = { "Content-Type": "application/json", "Authorization": f"Bearer {aimlapi_key}", } response = requests.post(url, headers=headers, json=payload) response.raise_for_status() audio_hex = response.json()["data"]["audio"] decoded_hex = bytes.fromhex(audio_hex) out_name = "generated_audio.mp3" with open(out_name, "wb") as f: f.write(decoded_hex) print(f"Generated audio saved as {out_name}") def main(): uploaded = upload_reference_file() if not uploaded: return # Extract IDs depending on purpose voice_id = uploaded.get("voice_id") instrumental_id = uploaded.get("instrumental_id") generate_audio(voice_id, instrumental_id) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript import { writeFile } from "node:fs/promises"; import { Blob } from "node:buffer"; // Insert your AI/ML API key here: const API_KEY = ""; // Input data const AUDIO_URL = "https://tand-dev.github.io/audio-hosting/spinning-head-271171.mp3"; const FILE_NAME = "spinning-head-271171.mp3"; const PURPOSE = "song"; // Possible values: 'song', 'voice', 'instrumental' // Download file from URL and upload it to AIML API async function uploadReferenceFile() { const uploadUrl = "https://api.aimlapi.com/v2/generate/audio/minimax/upload"; try { // Step 1: Download the file const response = await fetch(AUDIO_URL); if (!response.ok) throw new Error(`Failed to download file: ${response.status}`); const arrayBuffer = await response.arrayBuffer(); const fileBlob = new Blob([arrayBuffer], { type: "audio/mpeg" }); // Step 2: Upload to AIML API const formData = new FormData(); formData.append("purpose", PURPOSE); formData.append("file", fileBlob, FILE_NAME); const uploadResponse = await fetch(uploadUrl, { method: "POST", headers: { Authorization: `Bearer ${API_KEY}`, // Content-Type should not be set manually for FormData }, body: formData, }); if (!uploadResponse.ok) { const text = await uploadResponse.text(); throw new Error(`Upload failed ${uploadResponse.status}: ${text}`); } const data = await uploadResponse.json(); console.log("Upload successful:", data); return data; // JSON with file ids } catch (err) { console.error("Error during upload:", err.message); return null; } } // Send audio generation request and save result async function generateAudio(voiceId = null, instrumentalId = null) { const url = "https://api.aimlapi.com/v2/generate/audio/minimax/generate"; const lyrics = ` ##Side by side, through thick and thin, With a laugh, we always win. 
Storms may come, but we stay true, Friends forever—me and you!## `.trim(); const payload = { refer_voice: voiceId, refer_instrumental: instrumentalId, lyrics, model: "music-01", }; const res = await fetch(url, { method: "POST", headers: { "Content-Type": "application/json", Authorization: `Bearer ${API_KEY}`, }, body: JSON.stringify(payload), }); if (!res.ok) { const text = await res.text(); throw new Error(`Generation failed ${res.status}: ${text}`); } const data = await res.json(); const audioHex = data?.data?.audio; if (!audioHex) throw new Error("No audio hex in response"); const audioBuffer = Buffer.from(audioHex, "hex"); const outName = "generated_audio.mp3"; await writeFile(outName, audioBuffer); console.log(`Generated audio saved as ${outName}`); } // Main function async function main() { const uploaded = await uploadReferenceFile(); if (!uploaded) return; // Extract IDs depending on purpose const voiceId = uploaded.voice_id; const instrumentalId = uploaded.instrumental_id; await generateAudio(voiceId, instrumentalId); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Upload successful: {'voice_id': 'vocal-2025082518145625-6XW9wCOF', 'instrumental_id': 'instrumental-2025082518145625-vCCEiiES', 'trace_id': '04fb6a8721abeee5b66edd452b4d0f33', 'base_resp': {'status_code': 0, 'status_msg': 'success'}} Generated audio saved as generated_audio.mp3 ``` {% endcode %}
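The Python example above decodes hex-encoded audio. According to the schema, `data.audio` may instead contain a download URL when `output_format` is set to `"url"`. Below is a small, hypothetical helper that saves the result in either case; adapt it to your actual request settings.

{% code overflow="wrap" %}
```python
# Hypothetical helper: save the music-01 result whether the API returned
# hex-encoded audio or a download URL (depends on the requested output_format).
import requests


def save_music01_audio(audio: str, out_name: str = "generated_audio.mp3") -> None:
    if audio.startswith("http"):        # output_format == "url"
        content = requests.get(audio).content
    else:                               # output_format == "hex"
        content = bytes.fromhex(audio)
    with open(out_name, "wb") as f:
        f.write(content)
    print(f"Saved {out_name}")
```
{% endcode %}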
Listen to the track we generated: {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/music-models/minimax/music-1.5.md # music-1.5 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `minimax/music-1.5` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} The model creates full-length songs (up to 4 minutes) featuring natural-sounding vocals and detailed instrumental arrangements. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schemas ### Generate music sample This endpoint generates a music piece based on the prompt (which includes style instructions) and the provided lyrics. It returns a generation task ID, its status, and related metadata. ## POST /v2/generate/audio > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/generate/audio":{"post":{"operationId":"_v2_generate_audio","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["minimax/music-1.5"]},"prompt":{"type":"string","minLength":10,"maxLength":300,"description":"A description of the music, specifying style, mood, and scenario. Length: 10–300 characters."},"lyrics":{"type":"string","minLength":10,"maxLength":3000,"description":"Lyrics of the song. Use (\n) to separate lines. You may add structure tags like [Intro], [Verse], [Chorus], [Bridge], [Outro] to enhance the arrangement. Length: 10–3000 characters."},"audio_setting":{"type":"object","properties":{"sample_rate":{"type":"integer","description":"The sampling rate of the generated music.","enum":[16000,24000,32000,44100]},"bitrate":{"type":"integer","description":"The bit rate of the generated music.","enum":[32000,64000,128000,256000]},"format":{"type":"string","enum":["mp3","wav","pcm"],"description":"The format of the generated music."}},"required":["format"]}},"required":["model","prompt","lyrics"],"title":"minimax/music-1.5"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated audio."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"audio_file":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated music sample from the server After sending a request for music generation, this task is added to the queue. Based on the service's load, the generation can be completed in 50-60 seconds or take a bit more. 
## GET /v2/generate/audio > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/generate/audio":{"get":{"operationId":"_v2_generate_audio","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated audio."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"audio_file":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Quick Code Example Here’s an example of generating an audio file using a prompt with style instructions and a separate parameter for the lyrics. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import time import requests # Insert your AI/ML API key instead of : aimlapi_key = '' # Creating and sending an audio generation task to the server (returns a generation ID) def generate_audio(): url = "https://api.aimlapi.com/v2/generate/audio" payload = { "model": "minimax/music-1.5", "prompt": "A calm and soothing instrumental music with gentle piano and soft strings.", "lyrics": "[Verse]\nStreetlights flicker, the night breeze sighs\nShadows stretch as I walk alone\nAn old coat wraps my silent sorrow\nWandering, longing, where should I go\n[Chorus]\nPushing the wooden door, the aroma spreads\nIn a familiar corner, a stranger gazes back\nWarm lights flicker, memories awaken\nIn this small cafe, I find my way\n[Verse]\nRaindrops tap on the windowpane\nA melody plays, soft and low\nThe clink of cups, the murmur of dreams\nIn this haven, I find my home\n[Chorus]\nPushing the wooden door, the aroma spreads\nIn a familiar corner, a stranger gazes back\nWarm lights flicker, memories awaken\nIn this small cafe, I find my way" } headers = {"Authorization": f"Bearer {aimlapi_key}", "Content-Type": "application/json"} response = requests.post(url, json=payload, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print("Generation:", response_data) return response_data # Requesting the result of the generation task from the server using the generation_id: def retrieve_audio(gen_id): url = "https://api.aimlapi.com/v2/generate/audio" params = { "generation_id": gen_id, } headers = {"Authorization": f"Bearer {aimlapi_key}", "Content-Type": "application/json"} response = requests.get(url, params=params, headers=headers) return response.json() # This is the main function of the program. 
From here, we sequentially call the audio generation and then repeatedly request the result from the server every 10 seconds: def main(): generation_response = generate_audio() gen_id = generation_response.get("id") if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = retrieve_audio(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status == "generating" or status == "queued" or status == "waiting": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Generation complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AI/ML API key instead of : const API_KEY = ''; async function generateAudio() { const url = 'https://api.aimlapi.com/v2/generate/audio'; const payload = { model: 'minimax/music-1.5', prompt: 'A calm and soothing instrumental music with gentle piano and soft strings.', lyrics: '[Verse]\nStreetlights flicker, the night breeze sighs\nShadows stretch as I walk alone\nAn old coat wraps my silent sorrow\nWandering, longing, where should I go\n[Chorus]\nPushing the wooden door, the aroma spreads\nIn a familiar corner, a stranger gazes back\nWarm lights flicker, memories awaken\nIn this small cafe, I find my way\n[Verse]\nRaindrops tap on the windowpane\nA melody plays, soft and low\nThe clink of cups, the murmur of dreams\nIn this haven, I find my home\n[Chorus]\nPushing the wooden door, the aroma spreads\nIn a familiar corner, a stranger gazes back\nWarm lights flicker, memories awaken\nIn this small cafe, I find my way' }; const response = await fetch(url, { method: 'POST', headers: { 'Authorization': `Bearer ${API_KEY}`, 'Content-Type': 'application/json' }, body: JSON.stringify(payload) }); if (!response.ok) { console.error(`Error: ${response.status} - ${await response.text()}`); return null; } const data = await response.json(); console.log('Generation:', data); return data; } async function retrieveAudio(generationId) { const url = `https://api.aimlapi.com/v2/generate/audio?generation_id=${generationId}`; const response = await fetch(url, { method: 'GET', headers: { 'Authorization': `Bearer ${API_KEY}`, 'Content-Type': 'application/json' } }); if (!response.ok) { console.error(`Error: ${response.status} - ${await response.text()}`); return null; } return await response.json(); } async function main() { const generationResponse = await generateAudio(); if (!generationResponse || !generationResponse.id) { console.error('No generation ID received.'); return; } const genId = generationResponse.id; const timeout = 600000; // 10 minutes const interval = 10000; // 10 seconds const start = Date.now(); const intervalId = setInterval(async () => { if (Date.now() - start > timeout) { console.log('Timeout reached. Stopping.'); clearInterval(intervalId); return; } const result = await retrieveAudio(genId); if (!result) { console.error('No response from API.'); clearInterval(intervalId); return; } const status = result.status; if (['generating', 'queued', 'waiting'].includes(status)) { console.log(`Status: ${status}. Checking again in 10 seconds.`); } else { console.log('Generation complete:\n', result); clearInterval(intervalId); } }, interval); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation: {'id': 'd51032d5-e7b3-4e5b-a5c8-12e0c9474949:minimax/music-1.5', 'status': 'queued'} Still waiting... Checking again in 10 seconds. Still waiting... Checking again in 10 seconds. Still waiting... Checking again in 10 seconds. Still waiting... Checking again in 10 seconds. Still waiting... Checking again in 10 seconds. Still waiting... Checking again in 10 seconds. Still waiting... Checking again in 10 seconds. Still waiting... Checking again in 10 seconds. Generation complete:\n {'id': 'd51032d5-e7b3-4e5b-a5c8-12e0c9474949:minimax/music-1.5', 'status': 'completed', 'audio_file': {'url': 'https://minimax-algeng-chat-tts-us.oss-us-east-1.aliyuncs.com/music%2Fprod%2Ftts-20251106201349-ESweHLjHtnWFQLwO.mp3?Expires=1762517636&OSSAccessKeyId=LTAI5tCpJNKCf5EkQHSuL9xg&Signature=ZIKZMHUCU3r30ysGjbSoqc3aVks%3D'}, 'extra_info': {'music_duration': 93022, 'music_sample_rate': 44100, 'music_channel': 2, 'bitrate': 256000, 'music_size': 0}, 'trace_id': '055bc3b9622dd73d828276162cc7d516'} ``` {% endcode %}
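The completed response only returns a link in `audio_file.url`. If you also want the track saved locally, a short sketch like the following (assuming the response format shown above) can be called with the final `response_data`:

{% code overflow="wrap" %}
```python
# Sketch: download the finished track from the URL returned in audio_file.url.
# Assumes the completed-response structure shown above.
import requests


def download_track(response_data, out_name="generated_track.mp3"):
    url = response_data["audio_file"]["url"]  # present when status == "completed"
    audio = requests.get(url)
    audio.raise_for_status()
    with open(out_name, "wb") as f:
        f.write(audio.content)
    print(f"Saved {out_name}")
```
{% endcode %}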
Listen to the track we generated: {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/music-models/minimax/music-2.0.md # music-2.0 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `minimax/music-2.0` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} A fast and cost-efficient music generator optimized for high-quality music production. The model creates full-length songs (up to 4 minutes) featuring natural-sounding vocals and detailed instrumental arrangements. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schemas ### Generate music sample This endpoint generates a music piece based on the prompt (which includes style instructions) and the provided lyrics. It returns a generation task ID, its status, and related metadata. ## POST /v2/generate/audio > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/generate/audio":{"post":{"operationId":"_v2_generate_audio","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["minimax/music-2.0"]},"prompt":{"type":"string","minLength":10,"maxLength":2000,"description":"A description of the music, specifying style, mood, and scenario. Length: 10–2000 characters."},"lyrics":{"type":"string","minLength":10,"maxLength":3000,"description":"Lyrics of the song. Use (\n) to separate lines. You may add structure tags like [Intro], [Verse], [Chorus], [Bridge], [Outro] to enhance the arrangement. Length: 10–3000 characters."},"audio_setting":{"type":"object","properties":{"sample_rate":{"type":"integer","description":"The sampling rate of the generated music.","enum":[16000,24000,32000,44100]},"bitrate":{"type":"integer","description":"The bit rate of the generated music.","enum":[32000,64000,128000,256000]},"format":{"type":"string","enum":["mp3","wav","pcm"],"description":"The format of the generated music."}},"required":["format"]}},"required":["model","prompt","lyrics"],"title":"minimax/music-2.0"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated audio."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"audio_file":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated music sample from the server After sending a request for music generation, this task is added to the queue. 
This endpoint lets you check the status of a audio generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `complete`, the response will include the final result — with the generated audio URL and additional metadata. ## GET /v2/generate/audio > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/generate/audio":{"get":{"operationId":"_v2_generate_audio","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated audio."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"audio_file":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Quick Code Example Here’s an example of generating an audio file using a prompt with style instructions and a separate parameter for the lyrics. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import time import requests # Insert your AI/ML API key instead of : aimlapi_key = '' # Creating and sending an audio generation task to the server (returns a generation ID) def generate_audio(): url = "https://api.aimlapi.com/v2/generate/audio" payload = { "model": "minimax/music-2.0", "prompt": "A calm and soothing instrumental music with gentle piano and soft strings.", "lyrics": "[Verse]\nStreetlights flicker, the night breeze sighs\nShadows stretch as I walk alone\nAn old coat wraps my silent sorrow\nWandering, longing, where should I go\n[Chorus]\nPushing the wooden door, the aroma spreads\nIn a familiar corner, a stranger gazes back\nWarm lights flicker, memories awaken\nIn this small cafe, I find my way\n[Verse]\nRaindrops tap on the windowpane\nA melody plays, soft and low\nThe clink of cups, the murmur of dreams\nIn this haven, I find my home\n[Chorus]\nPushing the wooden door, the aroma spreads\nIn a familiar corner, a stranger gazes back\nWarm lights flicker, memories awaken\nIn this small cafe, I find my way" } headers = {"Authorization": f"Bearer {aimlapi_key}", "Content-Type": "application/json"} response = requests.post(url, json=payload, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print("Generation:", response_data) return response_data # Requesting the result of the generation task from the server using the generation_id: def retrieve_audio(gen_id): url = "https://api.aimlapi.com/v2/generate/audio" params = { "generation_id": gen_id, } headers = {"Authorization": f"Bearer {aimlapi_key}", "Content-Type": "application/json"} response = requests.get(url, params=params, headers=headers) return response.json() # This is the main function of the program. From here, we sequentially call the audio generation and then repeatedly request the result from the server every 15 seconds: def main(): # Running video generation and getting a task id gen_response = generate_audio() print(gen_response) gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = retrieve_audio(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["queued", "generating"]: print(f"Status: {status}. Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. 
Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AI/ML API key instead of : const API_KEY = ''; async function generateAudio() { const url = 'https://api.aimlapi.com/v2/generate/audio'; const payload = { model: 'minimax/music-2.0', prompt: 'A calm and soothing instrumental music with gentle piano and soft strings.', lyrics: '[Verse]\nStreetlights flicker, the night breeze sighs\nShadows stretch as I walk alone\nAn old coat wraps my silent sorrow\nWandering, longing, where should I go\n[Chorus]\nPushing the wooden door, the aroma spreads\nIn a familiar corner, a stranger gazes back\nWarm lights flicker, memories awaken\nIn this small cafe, I find my way\n[Verse]\nRaindrops tap on the windowpane\nA melody plays, soft and low\nThe clink of cups, the murmur of dreams\nIn this haven, I find my home\n[Chorus]\nPushing the wooden door, the aroma spreads\nIn a familiar corner, a stranger gazes back\nWarm lights flicker, memories awaken\nIn this small cafe, I find my way' }; const response = await fetch(url, { method: 'POST', headers: { 'Authorization': `Bearer ${API_KEY}`, 'Content-Type': 'application/json' }, body: JSON.stringify(payload) }); if (!response.ok) { console.error(`Error: ${response.status} - ${await response.text()}`); return null; } const data = await response.json(); console.log('Generation:', data); return data; } async function retrieveAudio(generationId) { const url = `https://api.aimlapi.com/v2/generate/audio?generation_id=${generationId}`; const response = await fetch(url, { method: 'GET', headers: { 'Authorization': `Bearer ${API_KEY}`, 'Content-Type': 'application/json' } }); if (!response.ok) { console.error(`Error: ${response.status} - ${await response.text()}`); return null; } return await response.json(); } async function main() { const generationResponse = await generateAudio(); if (!generationResponse || !generationResponse.id) { console.error('No generation ID received.'); return; } const genId = generationResponse.id; const timeout = 600000; // 10 minutes const interval = 15000; // 15 seconds const start = Date.now(); const intervalId = setInterval(async () => { if (Date.now() - start > timeout) { console.log('Timeout reached. Stopping.'); clearInterval(intervalId); return; } const result = await retrieveAudio(genId); if (!result) { console.error('No response from API.'); clearInterval(intervalId); return; } const status = result.status; if (['generating', 'queued'].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); } else { console.log('Generation complete:\n', result); clearInterval(intervalId); } }, interval); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation: {'id': 'e84c6bd4-bb02-4702-a55d-9b73e170c110:minimax/music-2.0', 'status': 'queued'} {'id': 'e84c6bd4-bb02-4702-a55d-9b73e170c110:minimax/music-2.0', 'status': 'queued'} Generation ID: e84c6bd4-bb02-4702-a55d-9b73e170c110:minimax/music-2.0 Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: {'id': 'e84c6bd4-bb02-4702-a55d-9b73e170c110:minimax/music-2.0', 'status': 'completed', 'audio_file': {'url': 'https://cdn.aimlapi.com/moose/music%2Fprod%2Ftts-20260108200145-PyWuVGVLaIkSAOEp.mp3?Expires=1767960111&OSSAccessKeyId=LTAI5tCpJNKCf5EkQHSuL9xg&Signature=pTfbqVDKtdUdX1dr0iqEjSI4xnY%3D'}, 'extra_info': {'music_duration': 107102, 'music_sample_rate': 44100, 'music_channel': 2, 'bitrate': 25600, 'music_size': 0}, 'trace_id': '05ae21d02744d305c8493d77c5b96a26'} ``` {% endcode %}
Listen to the track we generated: {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/music-models.md # Music Models ## Overview Our API features the capability to generate audio. With this API, you can create your own music, speech, and any audio experience from your prompt and imagination. We support multiple music models. You can find the [complete list](#all-available-music-models) along with API reference links at the end of the page. ## Quick Code Example Here is an example of generating an audio file based on a sample and a prompt using the music model [**minimax-music**](https://docs.aimlapi.com/api-references/music-models/minimax/minimax-music-legacy) from MiniMax.
Full example explanation As an example, we will generate a song using the popular **minimax-music** model from the Chinese company MiniMax. As you can verify in its [**API Reference**](https://docs.aimlapi.com/api-overview/audio-models-music-and-vocal/minimax-music-legacy), this model accepts an audio sample as input—extracting information about its vocals and instruments for use in the generation process—along with a text prompt where we can provide lyrics for our song.

We used a publicly available sample from a royalty-free sample database and generated some lyrics in [Chat GPT](https://docs.aimlapi.com/api-overview/text-models-llm/chat-completion):

*Side by side, through thick and thin,*\
*With a laugh, we always win.*\
*Storms may come, but we stay true,*\
*Friends forever—me and you!*

To turn this into a model-friendly prompt (as a single string), we added hash symbols and line breaks:

'''\
##Side by side, through thick and thin, \n\nWith a laugh, we always win. \n\n Storms may come, but we stay true, \n\nFriends forever—me and you!##\
'''

A notable feature of the **minimax-music** model is that sample uploading/voice analysis + music generation, and retrieving the final audio file from the server, are done through separate API calls. *(AIML API tokens are only consumed during the first step—i.e., the actual music generation.)*

You can insert the contents of each of the two code blocks into a separate Python file in your preferred development environment (or, for example, place each part in a separate cell in **Jupyter Notebook**). Replace `` in both fragments with the **AIML API Key** obtained from your [account](https://aimlapi.com/app/keys).

Next, run the first code block. If everything is set up correctly, you will see the following line in the program output (the specific numbers, of course, will vary):

{% code overflow="wrap" %}
```javascript
Generation: {'id': '906aec79-b0af-40c4-adae-15e6c4410e29:minimax-music', 'status': 'queued'}
```
{% endcode %}

This indicates that the file upload and our generation have been queued on the server (which took 4.5 seconds in our case). Copy this `id` value (*without* quotation marks) and insert it into the second code block, replacing ``. Now we can execute the second code block to get our song from the server.

Processing the request on the server may take some time (usually less than a minute). If the requested file is not yet ready, the output will display the corresponding status. Try waiting a bit and rerun the second code block. *(If you're comfortable with coding, you can modify the script to perform this request inside a loop; a sketch of such a loop follows the second code block below.)*

In our case, after three reruns of the second code block (waiting a total of about 20 seconds), we saw the following output:

{% code overflow="wrap" %}
```javascript
Generation: {'id': '906aec79-b0af-40c4-adae-15e6c4410e29:minimax-music', 'status': 'completed', 'audio_file': {'url': 'https://cdn.aimlapi.com/squirrel/files/koala/Oa2XHFE1hEsUn1qbcAL2s_output.mp3', 'content_type': 'audio/mpeg', 'file_name': 'output.mp3', 'file_size': 1014804}}
```
{% endcode %}

As you can see, the `'status'` is now `'completed'`, and further along the output line we have a URL where the generated audio file can be downloaded. Listen to the track we generated below the code blocks.
The first code block (sample uploading and music generation):

{% code overflow="wrap" %}
```python
# 1st code block
import requests


def main():
    url = "https://api.aimlapi.com/v2/generate/audio"
    payload = {
        "model": "minimax-music",
        "reference_audio_url": 'https://tand-dev.github.io/audio-hosting/spinning-head-271171.mp3',
        "prompt": '''
        ##Side by side, through thick and thin, \n\nWith a laugh, we always win. \n\n Storms may come, but we stay true, \n\nFriends forever—me and you!##
        ''',
    }
    # Insert your AIML API Key instead of :
    headers = {"Authorization": "Bearer ", "Content-Type": "application/json"}

    response = requests.post(url, json=payload, headers=headers)
    print("Generation:", response.json())


if __name__ == "__main__":
    main()
```
{% endcode %}

The second code block (retrieving the generated audio file from the server):
{% code overflow="wrap" %}
```python
# 2nd code block
import requests


def main():
    url = "https://api.aimlapi.com/v2/generate/audio"
    params = {
        # Insert the id from the output of the 1st code block, instead of <GENERATION_ID>:
        "generation_id": "<GENERATION_ID>",
    }
    # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
    headers = {"Authorization": "Bearer <YOUR_AIMLAPI_KEY>", "Content-Type": "application/json"}

    response = requests.get(url, params=params, headers=headers)
    print("Generation:", response.json())

if __name__ == "__main__":
    main()
```
{% endcode %}
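As noted in the explanation above, you can also let the script poll the server instead of rerunning the second code block by hand. Here is a minimal sketch of such a loop; it reuses the same GET request and assumes the `queued`/`generating` status values used in the other music-model examples.

{% code overflow="wrap" %}
```python
# Optional polling loop (sketch): repeat the GET request until the track is ready.
# Assumes the "queued"/"generating" status values seen elsewhere in these docs.
import time
import requests


def wait_for_track(generation_id, api_key, timeout=600, interval=10):
    url = "https://api.aimlapi.com/v2/generate/audio"
    headers = {"Authorization": f"Bearer {api_key}", "Content-Type": "application/json"}
    start = time.time()
    while time.time() - start < timeout:
        data = requests.get(url, params={"generation_id": generation_id}, headers=headers).json()
        if data.get("status") in ("queued", "generating", "waiting"):
            time.sleep(interval)  # not ready yet, check again shortly
            continue
        return data  # completed responses include audio_file['url']
    raise TimeoutError("Generation did not finish within the timeout")
```
{% endcode %}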
Listen to the track we generated: {% embed url="" fullWidth="false" %} ## All Available Music Models
Model IDDeveloperContextModel Card
elevenlabs/eleven_musicElevenLabsEleven Music
google/lyria2GoogleLyria 2
stable-audioStability AIStable Audio
minimax-musicMinimax AI-
music-01Minimax AIMiniMax Music
minimax/music-1.5Minimax AIMiniMax Music 1.5
minimax/music-2.0Minimax AIMiniMax Music 2.0
--- # Source: https://docs.aimlapi.com/faq/my-requests-are-cropped.md # Are my requests cropped? AI/ML API has a parameter called `max_tokens`. Usually, this parameter can be crucial if your requests are large and can lead to text cropping. This parameter controls the maximum number of input and output tokens combined that your model will use inside your request and can save your tokens if the generation is larger than you expect. Try to adjust it and see the result. --- # Source: https://docs.aimlapi.com/api-references/text-models-llm/gryphe/mythomax-l2-13b.md # MythoMax L2 (13B) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `gryphe/mythomax-l2-13b` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview This model represents a pinnacle in the evolution of LLMs, purpose-built for storytelling and roleplaying, delivering a rich sense of connection with characters and narrative arcs. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that key is enabled on UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to. :digit\_four: **(Optional)**** Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
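For orientation, here is a minimal sketch of the request assembled in the steps above, using only the required `model` and `messages` fields from the schema below; the full code example referenced in step 2 is at the bottom of this page.

{% code overflow="wrap" %}
```python
# Minimal sketch of a chat completion call to gryphe/mythomax-l2-13b
# (required fields only; see the API schema below for optional parameters).
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "gryphe/mythomax-l2-13b",
        "messages": [{"role": "user", "content": "Tell a short story about two unlikely friends."}],
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}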
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["gryphe/mythomax-l2-13b"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. 
Higher values decrease repetition."},"top_a":{"type":"number","minimum":0,"maximum":1,"description":"Alternate top sampling parameter."}},"required":["model","messages"],"title":"gryphe/mythomax-l2-13b"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"gryphe/mythomax-l2-13b", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // Insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'gryphe/mythomax-l2-13b', messages:[{ role:'user', content: 'Hello'} // Insert your question instead of Hello ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "gen-1765359480-L7JM0C2akgI9GiPPedfG", "provider": "DeepInfra", "model": "gryphe/mythomax-l2-13b", "object": "chat.completion", "created": 1765359480, "choices": [ { "logprobs": null, "finish_reason": "stop", "native_finish_reason": "stop", "index": 0, "message": { "role": "assistant", "content": " Hello! How can I assist you today?", "refusal": null, "reasoning": null } } ], "usage": { "prompt_tokens": 36, "completion_tokens": 9, "total_tokens": 45, "cost": 3.6e-06, "is_byok": false, "prompt_tokens_details": { "cached_tokens": 0, "audio_tokens": 0, "video_tokens": 0 }, "cost_details": { "upstream_inference_cost": null, "upstream_inference_prompt_cost": 2.88e-06, "upstream_inference_completions_cost": 7.2e-07 }, "completion_tokens_details": { "reasoning_tokens": 0, "image_tokens": 0 } }, "meta": { "usage": { "credits_used": 7 } } } ``` {% endcode %}
--- # Source: https://docs.aimlapi.com/integrations/n8n.md # n8n ## About [**n8n**](https://n8n.io/) is an open-source workflow automation tool that lets you connect various services and automate tasks without writing full integrations manually. **Key features**: * **No-code / low-code interface:** Build workflows visually using a drag-and-drop editor. * **Extensive integrations:** Comes with 350+ prebuilt nodes for popular services like Slack, GitHub, Google Sheets, OpenAI, and many others. * **Flexible logic:** You can inject custom JavaScript at any point in the flow for more control. * **Self-hosting:** Run it locally or on your own server—no need to send data to external clouds. * **Extensibility:** Easily create custom nodes or connect to any API. n8n is popular with developers, product teams, and analysts who want to automate repetitive tasks, streamline processes, or create event-driven workflows—without building everything from scratch. *** ## Installation ### What installation type should I use? | Feature | Option 1: Community Node | Option 2: npm Install | | ---------------------------- | ------------------------ | --------------------- | | Setup Complexity | 🟢 Very Easy | 🟡 Medium | | Requires Restart | ❌ Usually not | ✅ Yes | | Model Catalog Access | ✅ Full (chat only) | ✅ Full (chat only) | | Supports Cloud & Self-Hosted | ✅ Yes | ✅ Self-hosted only | | Recommended For | Most users | DevOps & power users | *** ### ✅ Option 1: Use AI/ML API with Community Node Plugin (Recommended) This is the easiest and most reliable way to use AI/ML API in n8n. It requires no coding and gives you access to a dedicated **AIMLAPI** node directly in the n8n workflow editor. You will go from account creation to receiving your first AI response in just a few steps. *** **Step 1: Sign up for AI/ML API** * Visit . * Register an account with Google or email. * After logging in, navigate to [your dashboard](https://aimlapi.com/app/keys). * Create and copy your **API key**. *** **Step 2: Set up your n8n account** * Go to and click **Sign Up**.
* Fill out the registration form.
* Wait while your workspace is created.
* You will be redirected to your personal n8n workspace.
* Click **Start from scratch** to open the editor.
* Click **Add first step** to begin building your workflow.
* Select the node **When chat message received** as a trigger.\
![](https://3927338786-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FROMd1X5PuqtikJ48n2N9%2Fuploads%2Fgit-blob-8e93b60f0fe924893cb01ca5d4207f1ea68fa186%2Fstep-8.png?alt=media)
*** **Step 3: Add and install the AI/ML API Node** * Click the **+** button on the right side of the trigger node.\ Search for **AI/ML API**.
![](https://3927338786-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FROMd1X5PuqtikJ48n2N9%2Fuploads%2Fgit-blob-72cc31b887774b8cd4fe700660faf197c5177513%2Fstep-11.png?alt=media)
* Click on **AI/ML API**, then click **Install node** → **Add to workflow**.\
![](https://3927338786-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FROMd1X5PuqtikJ48n2N9%2Fuploads%2Fgit-blob-53dc681881adc45de8eef59a163c9fe5236dd336%2Fstep-12.png?alt=media)
* The node will appear in your workspace.
*** **Step 4: Connect your API Key** * Click **Create new credentials** in the AI/ML API node.
* Paste your **API key**.
* Click **Save**.
*** **Step 5: Configure the model and the input** * Select the model (e.g. `GPT 4o`) in the **Model Name or ID** field.
* Click **Execute previous node** to simulate user input and activate the chat input panel.
* Type a test message in the input field (e.g. “Tell me a fun fact”) and click **Send**.
*** **Step 6: Pass the input to AI/ML API** * Go back to the **AI/ML API node**, select the **Prompt** field.\ Click the **Expression** button.
* In the expression editor, expand **chatInput** on the left.\ Drag and drop it into the **Prompt** field.
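If drag-and-drop is inconvenient, you can also type the reference manually. In n8n’s expression syntax this usually looks like `{{ $json.chatInput }}`, where `chatInput` is the field produced by the chat trigger in the previous step; treat the exact expression as a starting point and verify it against the data shown on the left side of the expression editor.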
*** **Step 7: Run the flow** * Exit the node editor and click **Execute Node** (or the full workflow ▶️ button).
* You will see the AI/ML API response in the **Output** tab.\ ![](https://3927338786-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FROMd1X5PuqtikJ48n2N9%2Fuploads%2Fgit-blob-c257c9e929ba2d26195eed5932407aac5f68a32f%2Fstep-27.png?alt=media) *** 🎉 **You’re all set!**\ You’ve successfully built a working chat interaction using AI/ML API and n8n.
*** ### 🛠 Option 2: Use AI/ML API via npm Package (Manual, Self-Hosted) If you're running **n8n in a custom/self-hosted setup** and prefer to manage dependencies manually, you can install the AI/ML API plugin using `npm`. > ✅ **Note:** This option gives you **exactly the same features and interface** as Option 1.\ > The only difference is how the plugin is installed. Once it's added, the node usage, credentials, prompts, and output are identical. *** #### 📝 Installation via npm 1. Navigate to your self-hosted n8n directory 2. Run: ```bash npm install n8n-nodes-aimlapi ``` If you’re using Docker: * Mount the plugin as a volume into `/home/node/.n8n/custom`, or * Extend your `Dockerfile` and include the plugin in `package.json`. 📦 [Plugin on npm](https://www.npmjs.com/package/n8n-nodes-aimlapi) 3. Restart your n8n instance to register the new node. *** #### 🧩 Continue with Setup from [Option 1](#option-1-use-ai-ml-api-with-community-node-plugin-recommended) Once the plugin is installed and n8n restarted, continue from the following steps: * **Step 3**: Add and install the AI/ML API Node * **Step 4**: Connect your API Key * **Step 5**: Configure the model and the input * **Step 6**: Pass the input to AI/ML API * **Step 7**: Run the flow Everything works the same as in Option 1 — including system/user messages, prompt injection, and output formatting. *** ## How to Use the AI/ML API in n8n After completing the installation and setup steps described above, you can start using the configured model node in your workflows — for example, to build chatbots. For guidance on building different types of workflows, refer to [the official n8n documentation](https://docs.n8n.io/). You can also test the model's responses in the **Chat** window located at the bottom left of the editor. {% hint style="warning" %} Please note that this chat is intended for debugging purposes only. It does not represent the actual experience your end users will have. The formatting here is optimized for development and may include special tags or symbols. Your users will see clean, properly formatted responses as expected. {% endhint %}
*** ## 💬 Example Settings #### [GPT 4o](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o)
| Field | Value |
| ----- | ----- |
| Model | `openai/gpt-4o` |
| User Message | "Give me ideas for YouTube channels" |
| Temperature | 0.7 |
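For reference, the GPT 4o settings above roughly correspond to the following direct chat-completion request. This is a minimal sketch: the endpoint and field names follow the code examples elsewhere in these docs, and `<YOUR_AIMLAPI_KEY>` is an illustrative placeholder you must replace with your own key.

{% code overflow="wrap" %}
```python
import requests

# A sketch of the request behind the GPT 4o example settings above.
# <YOUR_AIMLAPI_KEY> is a placeholder; substitute your actual AIML API key.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/gpt-4o",
        "messages": [
            {"role": "user", "content": "Give me ideas for YouTube channels"},
        ],
        "temperature": 0.7,
    },
)
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}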
#### [Gemini 2.0 Flash](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.0-flash)
| Field | Value |
| ----- | ----- |
| Model | `google/gemini-2.0-flash` |
| User Message | "Write a summary of the latest Apple event" |
*** ## 📎 Links * 🔑 [Get your API key](https://aimlapi.com/app/keys?utm_source=n8n\&utm_medium=github\&utm_campaign=integration) * 🧪 [Model playground](https://aimlapi.com/app?utm_source=n8n\&utm_medium=github\&utm_campaign=integration) * 💬 [Join the community](https://aimlapi.com/community?utm_source=n8n\&utm_medium=github\&utm_campaign=integration) Let us know what you build — we’d love to feature your workflows! --- # Source: https://docs.aimlapi.com/api-references/text-models-llm/nvidia/nemotron-nano-9b-v2.md # nemotron-nano-9b-v2

{% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `nvidia/nemotron-nano-9b-v2` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %}
## Model Overview A unified model designed for both reasoning and non-reasoning tasks. It processes user inputs by first producing a reasoning trace, then delivering a final answer. The reasoning behavior can be adjusted through the system prompt — allowing the model to either show its intermediate reasoning steps or respond directly with the final result.\ The model offers strong document understanding and summarization capabilities. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to. :digit\_four: **(Optional) Adjust other optional parameters if needed** Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
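Since this model produces a reasoning trace before the final answer, you may want to control how much of that trace comes back. The [API schema](#api-schema) below exposes a `reasoning` object for this purpose; the following is a minimal sketch of a request that lowers the reasoning effort and excludes the trace from the response. `<YOUR_AIMLAPI_KEY>` and the prompt are illustrative placeholders.

{% code overflow="wrap" %}
```python
import requests

# Minimal sketch: call nvidia/nemotron-nano-9b-v2 with the `reasoning` options
# described in the schema below (effort: low/medium/high; exclude: hide the trace).
# <YOUR_AIMLAPI_KEY> is a placeholder; substitute your actual AIML API key.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "nvidia/nemotron-nano-9b-v2",
        "messages": [
            {"role": "user", "content": "Summarize the main points of this paragraph: ..."},
        ],
        "reasoning": {"effort": "low", "exclude": True},
    },
)
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}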
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["nvidia/nemotron-nano-9b-v2"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. 
This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. 
We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. 
Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"reasoning":{"type":"object","properties":{"effort":{"type":"string","enum":["low","medium","high"],"description":"Reasoning effort setting"},"max_tokens":{"type":"integer","minimum":1,"description":"Max tokens of reasoning content. Cannot be used simultaneously with effort."},"exclude":{"type":"boolean","description":"Whether to exclude reasoning from the response"}},"description":"Configuration for model reasoning/thinking tokens"},"echo":{"type":"boolean","description":"If True, the response will contain the prompt. Can be used with logprobs to return prompt logprobs."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"top_a":{"type":"number","minimum":0,"maximum":1,"description":"Alternate top sampling parameter."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. 
Higher values decrease repetition."}},"required":["model","messages"],"title":"nvidia/nemotron-nano-9b-v2"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"nvidia/nemotron-nano-9b-v2", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'nvidia/nemotron-nano-9b-v2', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "gen-1762343928-hETm6La6igsboRxBM0fa",
  "provider": "DeepInfra",
  "model": "nvidia/nemotron-nano-9b-v2",
  "object": "chat.completion",
  "created": 1762343928,
  "choices": [
    {
      "logprobs": null,
      "finish_reason": "stop",
      "native_finish_reason": "stop",
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "\n\nHello! How can I assist you today? 😊\n",
        "refusal": null,
        "reasoning": "Okay, the user just said \"Hello\". That's a greeting. I should respond politely. Let me make sure to acknowledge their greeting and offer help. Maybe say something like \"Hello! How can I assist you today?\" That's friendly and opens the door for them to ask questions. I should keep it simple and welcoming.\n",
        "reasoning_details": [
          {
            "type": "reasoning.text",
            "text": "Okay, the user just said \"Hello\". That's a greeting. I should respond politely. Let me make sure to acknowledge their greeting and offer help. Maybe say something like \"Hello! How can I assist you today?\" That's friendly and opens the door for them to ask questions. I should keep it simple and welcoming.\n",
            "format": "unknown",
            "index": 0
          }
        ]
      }
    }
  ],
  "usage": {
    "prompt_tokens": 14,
    "completion_tokens": 84,
    "total_tokens": 98,
    "prompt_tokens_details": null
  }
}
```
{% endcode %}
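If you only need the assistant's reply rather than the full response object, you can read it directly from the parsed JSON. A minimal sketch, assuming `data` is the dictionary returned by `response.json()` in the code example above and the response structure shown in this section (the reply text lives in `choices[0].message.content`):

{% code overflow="wrap" %}
```python
# Minimal sketch: extract the assistant's reply from the parsed response.
# Assumes `data` is the dictionary returned by response.json() in the example above.
reply = data["choices"][0]["message"]["content"]
print(reply.strip())  # e.g. "Hello! How can I assist you today? 😊"
```
{% endcode %}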
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/nousresearch.md

# NousResearch

- [hermes-4-405b](/api-references/text-models-llm/nousresearch/hermes-4-405b.md)

---

# Source: https://docs.aimlapi.com/api-references/speech-models/speech-to-text/deepgram/nova-2.md

# nova-2

{% hint style="info" %}
This documentation is valid for the following list of our models:

* `#g1_nova-2-automotive`
* `#g1_nova-2-conversationalai`
* `#g1_nova-2-drivethru`
* `#g1_nova-2-finance`
* `#g1_nova-2-general`
* `#g1_nova-2-medical`
* `#g1_nova-2-meeting`
* `#g1_nova-2-phonecall`
* `#g1_nova-2-video`
* `#g1_nova-2-voicemail`
{% endhint %}

## Model Overview

Nova-2 builds on the advancements of Nova-1 with speech-specific optimizations to its Transformer architecture, refined data curation techniques, and a multi-stage training approach. These improvements result in a lower word error rate (WER) and better entity recognition (including proper nouns and alphanumeric sequences), as well as enhanced punctuation and capitalization.

Nova-2 offers the following model options:

* **automotive**: Optimized for audio with automotive-oriented vocabulary.
* **conversationalai**: Optimized for use cases in which a human is talking to an automated bot, such as IVR, a voice assistant, or an automated kiosk.
* **drivethru**: Optimized for audio sources from drive-thrus.
* **finance**: Optimized for multiple speakers with varying audio quality, such as might be found on a typical earnings call. Vocabulary is heavily finance-oriented.
* **general**: Optimized for everyday audio processing.
* **medical**: Optimized for audio with medical-oriented vocabulary.
* **meeting**: Optimized for conference room settings, which include multiple speakers with a single microphone.
* **phonecall**: Optimized for low-bandwidth audio phone calls.
* **video**: Optimized for audio sourced from videos.
* **voicemail**: Optimized for low-bandwidth audio clips with a single speaker. Derived from the phonecall model.

{% hint style="success" %}
Nova-2 models use per-second billing. The cost of audio transcription is based on the number of seconds in the input audio file, not the processing time.
{% endhint %}

## Setup your API Key

If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).

## Quick Code Examples

Let's use the `#g1_nova-2-meeting` model to transcribe the following audio fragment:

{% embed url="" %}

### Example #1: Processing a Speech Audio File via URL
{% code overflow="wrap" %}
```python
import time
import requests

base_url = "https://api.aimlapi.com/v1"
# Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
api_key = "<YOUR_AIMLAPI_KEY>"

# Creating and sending a speech-to-text conversion task to the server
def create_stt():
    url = f"{base_url}/stt/create"
    headers = {
        "Authorization": f"Bearer {api_key}", 
    }

    data = {
        "model": "#g1_nova-2-meeting",
        "url": "https://audio-samples.github.io/samples/mp3/blizzard_primed/sample-0.mp3"
    }
 
    response = requests.post(url, json=data, headers=headers)
    
    if response.status_code >= 400:
        print(f"Error: {response.status_code} - {response.text}")
    else:
        response_data = response.json()
        print(response_data)
        return response_data

# Requesting the result of the task from the server using the generation_id
def get_stt(gen_id):
    url = f"{base_url}/stt/{gen_id}"
    headers = {
        "Authorization": f"Bearer {api_key}", 
    }
    response = requests.get(url, headers=headers)
    return response.json()
    
# First, start the generation, then repeatedly request the result from the server every 10 seconds.
def main():
    stt_response = create_stt()
    gen_id = stt_response.get("generation_id")

    if gen_id:
        start_time = time.time()

        timeout = 600
        while time.time() - start_time < timeout:
            response_data = get_stt(gen_id)

            if response_data is None:
                print("Error: No response from API")
                break
        
            status = response_data.get("status")

            if status == "waiting" or status == "active":
                print("Still waiting... Checking again in 10 seconds.")
                time.sleep(10)
            else:
                print("Processing complete:\n", response_data["result"]["results"]["channels"][0]["alternatives"][0]["transcript"])
                return response_data
   
        print("Timeout reached. Stopping.")
        return None     


if __name__ == "__main__":
    main()
```
{% endcode %}
Response

{% code overflow="wrap" %}
```
{'generation_id': 'h66460ba-0562-1dd9-b440-a56d947e72a3'}
Processing complete:
 He doesn't belong to you and i don't see how you have anything to do with what is be his power yet he's he persona from this stage to you be fine
```
{% endcode %}
### Example #2: Processing a Speech Audio File via File Path

{% code overflow="wrap" %}
```python
import time
import requests

base_url = "https://api.aimlapi.com/v1"
# Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
api_key = "<YOUR_AIMLAPI_KEY>"

# Creating and sending a speech-to-text conversion task to the server
def create_stt():
    url = f"{base_url}/stt/create"
    headers = {
        "Authorization": f"Bearer {api_key}",
    }

    data = {
        "model": "#g1_nova-2-meeting",
    }

    with open("stt-sample.mp3", "rb") as file:
        files = {"audio": ("sample.mp3", file, "audio/mpeg")}
        response = requests.post(url, data=data, headers=headers, files=files)

    if response.status_code >= 400:
        print(f"Error: {response.status_code} - {response.text}")
    else:
        response_data = response.json()
        print(response_data)
        return response_data

# Requesting the result of the task from the server using the generation_id
def get_stt(gen_id):
    url = f"{base_url}/stt/{gen_id}"
    headers = {
        "Authorization": f"Bearer {api_key}",
    }
    response = requests.get(url, headers=headers)
    return response.json()

# First, start the generation, then repeatedly request the result from the server every 10 seconds.
def main():
    stt_response = create_stt()
    gen_id = stt_response.get("generation_id")

    if gen_id:
        start_time = time.time()
        timeout = 600

        while time.time() - start_time < timeout:
            response_data = get_stt(gen_id)

            if response_data is None:
                print("Error: No response from API")
                break

            status = response_data.get("status")

            if status == "waiting" or status == "active":
                print("Still waiting... Checking again in 10 seconds.")
                time.sleep(10)
            else:
                print("Processing complete:\n", response_data["result"]["results"]["channels"][0]["alternatives"][0]["transcript"])
                return response_data

        print("Timeout reached. Stopping.")
        return None


if __name__ == "__main__":
    main()
```
{% endcode %}
Response

{% code overflow="wrap" %}
```
{'generation_id': 'd793a81c-f8d8-40e0-a7c6-049ec6f54446'}
Processing complete:
 He doesn't belong to you, and I don't see how you have anything to do with what is be his power yet. He's he pursuing that from this stage to you.
```
{% endcode %}
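The creation request is not limited to `model` plus a URL or an uploaded file: the API schema below lists a number of optional transcription flags, such as `punctuate`, `smart_format`, `diarize`, and `detect_language`. Here is a minimal sketch of passing a few of them, assuming `base_url`, `api_key`, and the polling helpers from Example #1 are already defined; the particular flag combination is illustrative, not prescriptive:

{% code overflow="wrap" %}
```python
# Illustrative only: the same POST /v1/stt/create call as above, with a few of the
# optional flags listed in the API schema below. Assumes base_url, api_key, and
# requests are defined as in Example #1.
data = {
    "model": "#g1_nova-2-general",
    "url": "https://audio-samples.github.io/samples/mp3/blizzard_primed/sample-0.mp3",
    "punctuate": True,        # add punctuation and capitalization
    "smart_format": True,     # extra formatting for readability
    "diarize": True,          # assign a speaker number to each word
    "detect_language": True,  # identify the dominant spoken language
}

response = requests.post(
    f"{base_url}/stt/create",
    json=data,
    headers={"Authorization": f"Bearer {api_key}"},
)
print(response.json())  # {'generation_id': '...'} — poll it with get_stt() as shown above
```
{% endcode %}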
## API Schema #### Creating and sending a speech-to-text conversion task to the server ## POST /v1/stt/create > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.SpeechToTextCreateResponseDTO":{"type":"object","properties":{"generation_id":{"type":"string","format":"uuid"}},"required":["generation_id"]}}},"paths":{"/v1/stt/create":{"post":{"operationId":"VoiceModelsController_createSpeechToText_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["#g1_nova-2-automotive","#g1_nova-2-conversationalai","#g1_nova-2-drivethru","#g1_nova-2-finance","#g1_nova-2-general","#g1_nova-2-medical","#g1_nova-2-meeting","#g1_nova-2-phonecall","#g1_nova-2-video","#g1_nova-2-voicemail"]},"custom_intent":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}}],"description":"A custom intent you want the model to detect within your input audio if present. Submit up to 100."},"custom_topic":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}}],"description":"A custom topic you want the model to detect within your input audio if present. Submit up to 100."},"custom_intent_mode":{"type":"string","enum":["strict","extended"],"description":"Sets how the model will interpret strings submitted to the custom_intent param. When strict, the model will only return intents submitted using the custom_intent param. When extended, the model will return its own detected intents in addition those submitted using the custom_intents param."},"custom_topic_mode":{"type":"string","enum":["strict","extended"],"description":"Sets how the model will interpret strings submitted to the custom_topic param. When strict, the model will only return topics submitted using the custom_topic param. When extended, the model will return its own detected topics in addition to those submitted using the custom_topic param."},"detect_language":{"type":"boolean","description":"Enables language detection to identify the dominant language spoken in the submitted audio."},"detect_entities":{"type":"boolean","description":"When Entity Detection is enabled, the Punctuation feature will be enabled by default."},"detect_topics":{"type":"boolean","description":"Detects the most important and relevant topics that are referenced in speech within the audio."},"diarize":{"type":"boolean","description":"Recognizes speaker changes. Each word in the transcript will be assigned a speaker number starting at 0."},"dictation":{"type":"boolean","description":"Identifies and extracts key entities from content in submitted audio."},"diarize_version":{"type":"string","description":""},"extra":{"type":"string","description":"Arbitrary key-value pairs that are attached to the API response for usage in downstream processing."},"filler_words":{"type":"boolean","description":"Filler Words can help transcribe interruptions in your audio, like “uh” and “um”."},"intents":{"type":"boolean","description":"Recognizes speaker intent throughout a transcript or text."},"keywords":{"type":"string","description":"Keywords can boost or suppress specialized terminology and brands."},"language":{"type":"string","description":"The BCP-47 language tag that hints at the primary spoken language. 
Depending on the Model and API endpoint you choose only certain languages are available"},"measurements":{"type":"boolean","description":"Spoken measurements will be converted to their corresponding abbreviations"},"multi_channel":{"type":"boolean","description":"Transcribes each audio channel independently"},"numerals":{"type":"boolean","description":"Numerals converts numbers from written format to numerical format"},"paragraphs":{"type":"boolean","description":"Splits audio into paragraphs to improve transcript readability"},"profanity_filter":{"type":"boolean","description":"Profanity Filter looks for recognized profanity and converts it to the nearest recognized non-profane word or removes it from the transcript completely"},"punctuate":{"type":"boolean","description":"Adds punctuation and capitalization to the transcript"},"search":{"type":"string","description":"Search for terms or phrases in submitted audio"},"sentiment":{"type":"boolean","description":"Recognizes the sentiment throughout a transcript or text"},"smart_format":{"type":"boolean","description":"Applies formatting to transcript output. When set to true, additional formatting will be applied to transcripts to improve readability"},"summarize":{"type":"string","description":"Summarizes content. For Listen API, supports string version option. For Read API, accepts boolean only."},"tag":{"type":"array","items":{"type":"string"},"description":"Labels your requests for the purpose of identification during usage reporting"},"topics":{"type":"boolean","description":"Detects topics throughout a transcript or text"},"utterances":{"type":"boolean","description":"Segments speech into meaningful semantic units"},"utt_split":{"type":"number","description":"Seconds to wait before detecting a pause between words in submitted audio"},"url":{"type":"string","format":"uri"}},"required":["model","url"]}}}},"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.SpeechToTextCreateResponseDTO"}}}}},"tags":["Voice Models"]}}}} ``` #### Requesting the result of the task from the server using the generation\_id ## GET /v1/stt/{generation\_id} > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.SpeechToTextGetResponseDTO":{"type":"object","properties":{"generation_id":{"type":"string"},"status":{"type":"string","enum":["queued","completed","error","generating"]},"result":{"anyOf":[{"type":"object","properties":{"metadata":{"type":"object","properties":{"transaction_key":{"type":"string","description":"A unique transaction key; currently always “deprecated”."},"request_id":{"type":"string","description":"A UUID identifying this specific transcription request."},"sha256":{"type":"string","description":"The SHA-256 hash of the submitted audio file (for pre-recorded requests)."},"created":{"type":"string","format":"date-time","description":"ISO-8601 timestamp."},"duration":{"type":"number","description":"Length of the audio in seconds."},"channels":{"type":"number","description":"The top-level results object containing per-channel transcription alternatives."},"models":{"type":"array","items":{"type":"string"},"description":"List of model UUIDs used for this 
transcription"},"model_info":{"type":"object","additionalProperties":{"type":"object","properties":{"name":{"type":"string","description":"The human-readable name of the model — identifies which model was used."},"version":{"type":"string","description":"The specific version of the model."},"arch":{"type":"string","description":"The architecture of the model — describes the model family / generation."}},"required":["name","version","arch"]},"description":"Mapping from each model UUID (in 'models') to detailed info: its name, version, and architecture."}},"required":["transaction_key","request_id","sha256","created","duration","channels","models","model_info"],"description":"Metadata about the transcription response, including timing, models, and IDs."},"results":{"type":"object","nullable":true,"properties":{"channels":{"type":"object","properties":{"alternatives":{"type":"array","items":{"type":"object","properties":{"transcript":{"type":"string","description":"The full transcript text for this alternative."},"confidence":{"type":"number","description":"Overall confidence score (0-1) that assigns to this transcript alternative."},"words":{"type":"array","items":{"type":"object","properties":{"word":{"type":"string","description":"The raw recognized word, without punctuation or capitalization."},"start":{"type":"number","description":"Start timestamp of the word (in seconds, from beginning of audio)."},"end":{"type":"number","description":"End timestamp of the word (in seconds)."},"confidence":{"type":"number","description":"Confidence score (0-1) for this individual word."},"punctuated_word":{"type":"string","description":"The same word but with punctuation/capitalization applied (if smart_format is enabled)."}},"required":["word","start","end","confidence","punctuated_word"]},"description":"List of word-level timing, confidence, and punctuation details."},"paragraphs":{"type":"array","items":{"type":"object","properties":{"transcript":{"type":"string","description":"The transcript split into paragraphs (with line breaks), when paragraphing is enabled."},"paragraphs":{"type":"object","properties":{"sentences":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"Text of a single sentence in the paragraph."},"start":{"type":"number","description":"Start time of the sentence (in seconds)."},"end":{"type":"number","description":"End time of the sentence (in seconds)."}},"required":["text","start","end"]},"description":"List of sentences in this paragraph, with start/end times."},"num_words":{"type":"number","description":"Number of words in this paragraph."},"start":{"type":"number","description":"Start time of the paragraph (in seconds)."},"end":{"type":"number","description":"End time of the paragraph (in seconds)."}},"required":["sentences","num_words","start","end"],"description":"Structure describing each paragraph: its timespan, word count, and sentence breakdown."}},"required":["transcript","paragraphs"]},"description":"An array of paragraph objects, present when the paragraphs feature is enabled."}},"required":["transcript","confidence","words","paragraphs"]},"description":"List of possible transcription hypotheses (“alternatives”) for each channel."}},"required":["alternatives"],"description":"The top-level results object containing per-channel transcription 
alternatives."}},"required":["channels"]}},"required":["metadata"]},{"type":"object","properties":{"id":{"type":"string","format":"uuid"},"language_model":{"type":"string"},"acoustic_model":{"type":"string"},"language_code":{"type":"string"},"status":{"type":"string","enum":["queued","processing","completed","error"]},"language_detection":{"type":"boolean"},"language_confidence_threshold":{"type":"number"},"language_confidence":{"type":"number"},"speech_model":{"type":"string","enum":["best","slam-1","universal"]},"text":{"type":"string"},"words":{"type":"array","items":{"type":"object","properties":{"confidence":{"type":"number"},"end":{"type":"number"},"speaker":{"type":"string"},"start":{"type":"number"},"text":{"type":"string"}},"required":["confidence","end","start","text"]}},"utterances":{"type":"array","items":{"type":"object","properties":{"confidence":{"type":"number"},"end":{"type":"number"},"speaker":{"type":"string"},"start":{"type":"number"},"text":{"type":"string"},"words":{"type":"array","items":{"type":"object","properties":{"confidence":{"type":"number"},"end":{"type":"number"},"speaker":{"type":"string"},"start":{"type":"number"},"text":{"type":"string"}},"required":["confidence","end","start","text"]}}},"required":["confidence","end","speaker","start","text","words"]}},"confidence":{"type":"number"},"audio_duration":{"type":"number"},"punctuate":{"type":"boolean"},"format_text":{"type":"boolean"},"disfluencies":{"type":"boolean"},"multichannel":{"type":"boolean"},"webhook_url":{"type":"string"},"webhook_status_code":{"type":"number"},"webhook_auth_header_name":{"type":"string"},"speed_boost":{"type":"boolean"},"auto_highlights_result":{"type":"object","properties":{"status":{"type":"string"},"results":{"type":"array","items":{"type":"object","properties":{"count":{"type":"number"},"rank":{"type":"number"},"text":{"type":"string"},"timestamps":{"type":"array","items":{"type":"object","properties":{"start":{"type":"number"},"end":{"type":"number"}},"required":["start","end"]}}},"required":["count","rank","text","timestamps"]}}},"required":["status","results"]},"auto_highlights":{"type":"boolean"},"audio_start_from":{"type":"number"},"audio_end_at":{"type":"number"},"word_boost":{"type":"array","items":{"type":"string"}},"boost_param":{"type":"string"},"filter_profanity":{"type":"boolean"},"redact_pii":{"type":"boolean"},"redact_pii_audio":{"type":"boolean"},"redact_pii_audio_quality":{"type":"string","enum":["mp3","wav"]},"redact_pii_policies":{"type":"array","items":{"type":"string"}},"redact_pii_sub":{"type":"string","enum":["entity_name","hash"]},"speaker_labels":{"type":"boolean"},"speakers_expected":{"type":"number"},"content_safety":{"type":"boolean"},"iab_categories":{"type":"boolean"},"content_safety_labels":{"type":"object","properties":{"status":{"type":"string"},"results":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string"},"labels":{"type":"array","items":{"type":"object","properties":{"label":{"type":"string"},"confidence":{"type":"number"},"severity":{"type":"number"}},"required":["label","confidence","severity"]}},"sentences_idx_start":{"type":"number"},"sentences_idx_end":{"type":"number"},"timestamp":{"type":"object","properties":{"start":{"type":"number"},"end":{"type":"number"}},"required":["start","end"]}},"required":["text","labels","sentences_idx_start","sentences_idx_end","timestamp"]}},"summary":{"type":"object","additionalProperties":{"type":"number"}}},"required":["status","results","summary"]},"iab_categories_result":{"t
ype":"object","properties":{"status":{"type":"string"},"results":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string"},"labels":{"type":"array","items":{"type":"object","properties":{"relevance":{"type":"number"},"label":{"type":"string"}},"required":["relevance","label"]}},"timestamp":{"type":"object","properties":{"start":{"type":"number"},"end":{"type":"number"}},"required":["start","end"]}},"required":["text","labels","timestamp"]}},"summary":{"type":"object","additionalProperties":{"type":"number"}}},"required":["status","results","summary"]},"custom_spelling":{"type":"array","items":{"type":"object","properties":{"from":{"type":"string"},"to":{"type":"string"}},"required":["from","to"]}},"chapters":{"type":"array","items":{"type":"object","properties":{"summary":{"type":"string"},"headline":{"type":"string"},"gist":{"type":"string"},"start":{"type":"number"},"end":{"type":"number"}},"required":["summary","headline","gist","start","end"]}},"summarization":{"type":"boolean"},"summary_type":{"type":"string"},"summary_model":{"type":"string"},"summary":{"type":"string"},"auto_chapters":{"type":"boolean"},"sentiment_analysis":{"type":"boolean"},"sentiment_analysis_results":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string"},"start":{"type":"number"},"end":{"type":"number"},"sentiment":{"type":"string","enum":["POSITIVE","NEUTRAL","NEGATIVE"]},"confidence":{"type":"number"},"speaker":{"type":"string"}},"required":["text","start","end","sentiment","confidence"]}},"entity_detection":{"type":"boolean"},"entities":{"type":"array","items":{"type":"object","properties":{"entity_type":{"type":"string"},"text":{"type":"string"},"start":{"type":"number"},"end":{"type":"number"}},"required":["entity_type","text","start","end"]}},"speech_threshold":{"type":"number"},"throttled":{"type":"boolean"},"error":{"type":"string"}},"required":["id","status"],"additionalProperties":false},{"type":"object","properties":{"text":{"type":"string"},"usage":{"type":"object","properties":{"type":{"type":"string","enum":["tokens"]},"input_tokens":{"type":"number"},"input_token_details":{"type":"object","properties":{"text_tokens":{"type":"number"},"audio_tokens":{"type":"number"}},"required":["text_tokens","audio_tokens"]},"output_tokens":{"type":"number"},"total_tokens":{"type":"number"}},"required":["input_tokens","output_tokens","total_tokens"]}},"required":["text"],"additionalProperties":false},{"nullable":true}]},"error":{"nullable":true}},"required":["generation_id","status"]}}},"paths":{"/v1/stt/{generation_id}":{"get":{"operationId":"VoiceModelsController_getSTT_v1","parameters":[{"name":"generation_id","required":true,"in":"path","schema":{"type":"string"}}],"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.SpeechToTextGetResponseDTO"}}}}},"tags":["Voice Models"]}}}} ``` --- # Source: https://docs.aimlapi.com/api-references/text-models-llm/nvidia.md # NVIDIA - [llama-3.1-nemotron-70b](/api-references/text-models-llm/nvidia/llama-3.1-nemotron-70b.md) - [nemotron-nano-9b-v2](/api-references/text-models-llm/nvidia/nemotron-nano-9b-v2.md) - [nemotron-nano-12b-v2-vl](/api-references/text-models-llm/nvidia/llama-3.1-nemotron-70b-1.md) --- # Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/o1.md # o1 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `o1` {% endhint %} {% endcolumn 
%}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}

## Model Overview

A state-of-the-art language model designed to excel in complex reasoning tasks, including mathematical problem-solving, programming challenges, and scientific inquiries. The model integrates advanced reasoning capabilities through its innovative architecture, making it suitable for a wide range of applications that require deep understanding and logical deduction.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `<YOUR_AIMLAPI_KEY>` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to.

:digit\_four: **(Optional)** **Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schemas), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
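For orientation before the full schema, here is a minimal sketch of such a request in Python. It assumes the standard chat completions pattern used elsewhere in these docs and sends only the required `model` and `messages` fields; the complete code example at the bottom of the page remains the reference.

{% code overflow="wrap" %}
```python
import requests

# Minimal sketch of a chat completion request to the o1 model.
# Replace <YOUR_AIMLAPI_KEY> with your actual AI/ML API key.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "o1",
        "messages": [
            {"role": "user", "content": "Hello"}  # insert your prompt here
        ],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}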
## API Schemas
Chat Completions vs. Responses API

**Chat Completions**\
The *chat completions* API is the older, chat-oriented interface where you send a list of messages (`role: user`, `role: assistant`, etc.), and the model returns a single response. It was designed specifically for conversational workflows and follows a structured chat message format. It is now considered a legacy interface.

**Responses**\
The *Responses* API is the newer, unified interface used across OpenAI’s latest models. Instead of focusing only on chat, it supports multiple input types (text, images, audio, tools, etc.) and multiple output modalities (text, JSON, images, audio, video). It is more flexible, more consistent across models, and intended to replace chat completions entirely.
### Chat Completions Endpoint ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["o1"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"reasoning_effort":{"type":"string","enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"o1"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}}}
```

### Responses Endpoint

This endpoint is currently used *only* with OpenAI models. Some models support both the `/chat/completions` and `/responses` endpoints, while others support only one of them.

## POST /v1/responses

> ```json
{"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/responses":{"post":{"operationId":"_v1_responses","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["o1"]},"input":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the user role."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. 
Instructions given with the developer or system role take precedence over instructions given with the user role."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"],"description":"An output message from the model."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"],"description":"A tool call to run a function."},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"],"description":"The output of a function tool call."},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"],"description":"A description of the chain of thought used by a reasoning model while generating a response."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The JSON schema describing the tool's input."},"name":{"type":"string","description":"The name of the tool."},"annotations":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Additional annotations about the tool."},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["input_schema","name"]},"description":"The tools available on the server."},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. 
Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"],"description":"A list of tools available on an MCP server."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"],"description":"A request for human approval of a tool invocation."},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"],"description":"A response to an MCP approval request."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"],"description":"An invocation of a tool on an MCP server."},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}],"description":"Text, image, or file inputs to the model, used to generate a response."},"background":{"type":"boolean","default":false,"description":"Whether to run the model response in the background."},"instructions":{"type":"string","nullable":true,"description":"A system (or developer) message inserted into the model's context.\n\nWhen using along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses."},"include":{"type":"array","nullable":true,"items":{"type":"string","enum":["message.input_image.image_url","computer_call_output.output.image_url","reasoning.encrypted_content","code_interpreter_call.outputs"]},"description":"Specify additional output data to include in the model response. 
Currently supported values are:\n- code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.\n- computer_call_output.output.image_url: Include image urls from the computer call output.\n- file_search_call.results: Include the search results of the file search tool call.\n- message.output_text.logprobs: Include logprobs with assistant messages.\n- reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).\n"},"max_output_tokens":{"type":"integer","description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]}]},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"store":{"type":"boolean","nullable":true,"default":false,"description":"Whether to store the generated model response for later retrieval via API."},"stream":{"type":"boolean","nullable":true,"default":false,"description":"If set to true, the model response data will be streamed to the client as it is generated using server-sent events. "},"text":{"type":"object","properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. 
Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["format"],"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"truncation":{"type":"string","enum":["auto","disabled"],"default":"disabled","description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"tools":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","description":"A description of the function. Used by the model to determine whether or not to call the function."}},"required":["name","parameters","strict","type"],"description":"Defines a function in your own code the model can choose to call."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"]}],"description":"How the model should select which tool (or tools) to use when generating a response."}},"required":["model","input"],"title":"o1"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is 
incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. 
A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
The remaining fields of the response object, and the item types that can appear in its input and output, are summarized below.

* `instructions` (string | array | null) — a system (or developer) message inserted into the model's context. When supplied as an array, it holds the same item types that can appear elsewhere in a response:
  * `message` — an assistant output message whose `content` parts are either `output_text` (with optional `annotations` such as `url_citation`, `file_citation`, `container_file_citation`, and `file_path`, plus optional `logprobs`) or `refusal`.
  * `file_search_call` — `id`, `queries`, `status` (`in_progress`, `searching`, `incomplete`, `failed`, `completed`), and optional `results`.
  * `computer_call` — a computer-use action (`click`, `double_click`, `drag`, `keypress`, `move`, `screenshot`, `scroll`, `type`, `wait`) plus `call_id`, `pending_safety_checks`, and `status`.
  * `computer_call_output` — `call_id`, a `computer_screenshot` output, and any `acknowledged_safety_checks`.
  * `web_search_call` — `id` and `status` (`in_progress`, `completed`, `searching`, `failed`).
  * `function_call` / `function_call_output` — the `call_id`, the function `name`, its JSON-string `arguments`, and the JSON-string `output` returned for the call.
  * `reasoning` — `id`, a `summary` array of `summary_text` items, and optional `encrypted_content` (populated when `reasoning.encrypted_content` is requested via the `include` parameter).
  * `image_generation_call` — `id`, `result`, and `status` (`in_progress`, `completed`, `failed`, `generating`).
  * `code_interpreter_call` — the `code` that was run, its `outputs` (`logs` or `image`), `status`, and `container_id`.
  * `local_shell_call` / `local_shell_call_output` — an `exec` action (`command`, `env`, optional `timeout_ms`, `user`, `working_directory`) and its JSON-string output.
  * MCP items — `mcp_list_tools`, `mcp_approval_request`, `mcp_approval_response`, and `mcp_call`.
  * `item_reference` — an internal identifier for an item to reference.
* `max_output_tokens` (integer | null) — an upper bound on the number of tokens that can be generated, including visible output tokens and reasoning tokens.
* `metadata` (object | null) — up to 16 key-value pairs attached to the object; keys are limited to 64 characters and values to 512 characters.
* `model` (string) — the model ID used to generate the response.
* `object` (string) — always `response`.
* `output` (array | null) — the content items generated by the model, drawn from the item types listed above. The length and order of the array depend on the model's response, so rather than assuming the first item is an assistant message, prefer the `output_text` convenience property where the SDK supports it.
* `output_text` (string | null) — SDK-only convenience property that aggregates the text of all `output_text` items in `output` (available in the Python and JavaScript SDKs).
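Where a client calls the HTTP API directly rather than through an SDK, the `output` array can be walked by item type. The sketch below does this with plain dictionaries, using only the field names listed above; `collect_output` and its handling choices are illustrative, not part of the API.

```python
def collect_output(response_json: dict) -> tuple[str, list[dict]]:
    """Aggregate assistant text and collect tool-call items from a response object.

    `response_json` is assumed to be the parsed JSON body of a completed
    response, shaped as described by the schema above.
    """
    text_parts: list[str] = []
    tool_calls: list[dict] = []

    for item in response_json.get("output") or []:
        item_type = item.get("type")
        if item_type == "message":
            # Assistant messages hold a list of content parts.
            for part in item.get("content", []):
                if part.get("type") == "output_text":
                    text_parts.append(part.get("text", ""))
                elif part.get("type") == "refusal":
                    text_parts.append(f"[refusal] {part.get('refusal', '')}")
        elif item_type in ("function_call", "mcp_call", "computer_call",
                           "web_search_call", "code_interpreter_call",
                           "image_generation_call", "local_shell_call"):
            tool_calls.append(item)
        # Other item types (reasoning, file_search_call, ...) can be handled
        # here as needed.

    return "".join(text_parts), tool_calls
```

This follows the schema's own guidance: rather than assuming the first output item is the assistant message, inspect item types explicitly (or use the `output_text` convenience property where the SDK provides it).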
* `parallel_tool_calls` (boolean) — whether the model is allowed to run tool calls in parallel.
* `previous_response_id` (string | null) — the unique ID of the previous response; use it to create multi-turn conversations.
* `prompt` (object | null) — reference to a prompt template: its `id`, an optional `variables` map (values may be strings or other input types such as images or files), and an optional `version`.
* `reasoning` (object | null) — options for reasoning models: `effort` (`low`, `medium`, `high`; lower effort can give faster responses and fewer reasoning tokens) and `summary` (`auto`, `concise`, `detailed`).
* `service_tier` (string | null) — the processing type used for serving the request.
* `status` — one of `completed`, `failed`, `in_progress`, `cancelled`, `queued`, `incomplete`.
* `temperature` (number | null, 0–2) — sampling temperature; higher values such as 0.8 make output more random, lower values such as 0.2 make it more focused and deterministic. Alter this or `top_p`, but not both.
* `text` (object | null) — the output text format: plain `text` (the default), `json_object` (an older JSON mode; `json_schema` is recommended where supported, and the model will not produce JSON without a system or user message instructing it to), or `json_schema` (structured output defined by a `name`, a JSON Schema `schema`, an optional `strict` flag, and an optional `description`).
* `tool_choice` — `none`, `auto`, or `required`; or an object selecting a built-in tool (`web_search_preview`, `web_search_preview_2025_03_11`, `computer_use_preview`, `code_interpreter`, `mcp`, `file_search`, `image_generation`); or `{"type": "function", "name": ...}` to force a specific function.
* `tools` (array | null) — the tools the model may call while generating a response:
  * web search preview — `search_context_size` (`low`, `medium`, `high`; default `medium`) and an optional approximate `user_location` (`city`, `country`, `region`, `timezone`).
  * `computer_use_preview` — `display_width`, `display_height`, and `environment` (`windows`, `mac`, `linux`, `ubuntu`, `browser`).
  * `mcp` — `server_label`, `server_url`, optional `allowed_tools`, optional `headers` (for authentication), and `require_approval` (`always`, `never`, or per-tool lists).
  * `code_interpreter` — a `container` ID or `{"type": "auto"}`.
  * `local_shell` — lets the model execute shell commands in a local environment.
  * `function` — `name`, a JSON Schema `parameters` object, an optional `strict` flag, and an optional `description` used by the model to decide whether to call the function.
  * `image_generation` — model `gpt-image-1` with options such as `background`, `quality`, `size`, `output_format`, `output_compression`, `moderation`, `partial_images`, and an optional `input_image_mask`.
* `top_p` (number | null) — nucleus sampling: only tokens within the top `top_p` probability mass are considered (0.1 means the top 10%). Alter this or `temperature`, but not both.
* `truncation` (`auto` | `disabled`) — with `auto`, input items in the middle of the conversation are dropped if the context window is exceeded; with `disabled` (the default), an over-long request fails with a 400 error.
* `usage` — token accounting: `input_tokens` (with `input_tokens_details.cached_tokens`), `output_tokens` (with `output_tokens_details.reasoning_tokens`), and `total_tokens`.

`created_at`, `id`, `model`, `object`, and `parallel_tool_calls` are always present on the response object. A sketch of the tool shapes described above follows.
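To make the tool definitions concrete, the fragment below sketches one way a function tool and an MCP tool could be declared, with a `tool_choice` that forces the function. The function name, parameter schema, server label, and server URL are hypothetical placeholders, not values defined anywhere in this reference.

```python
# Illustrative fragments matching the tool schemas above.
# "get_weather" and the MCP server details are hypothetical placeholders.
tools = [
    {
        "type": "function",
        "name": "get_weather",                    # placeholder function name
        "description": "Look up the current weather for a city.",
        "parameters": {                           # JSON Schema for the arguments
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
        "strict": True,                           # enforce strict schema adherence
    },
    {
        "type": "mcp",
        "server_label": "docs",                   # placeholder label
        "server_url": "https://example.com/mcp",  # placeholder URL
        "require_approval": "never",
    },
]

# Force the model to call the function tool declared above.
tool_choice = {"type": "function", "name": "get_weather"}
```

The response object's `tools` and `tool_choice` fields use these same shapes.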
event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.done"],"description":"The type of the event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The partial code snippet being streamed by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The final code snippet output by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.done"],"description":"The type of the event."}},"required":["code","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter is interpreting code."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.interpreting"],"description":"The type of the 
event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. 
Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, e.g."},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"Properties of the completed response."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.completed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."},"param":{"type":"string","description":"The error parameter."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["error"],"description":"The type of the event."}},"required":["code","message","param","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is searching."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The function-call arguments delta that is added."},"item_id":{"type":"string","description":"The ID of the output item that the function-call arguments delta is added to."},"output_index":{"type":"number","description":"The index of the output item that the function-call arguments delta is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"The function-call arguments."},"item_id":{"type":"string","description":"The ID of the item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this 
Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. 
One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.in_progress"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was 
created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.failed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The 
error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was incomplete."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.incomplete"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was added."},"output_index":{"type":"number","description":"The index of the output item that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.added"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was marked done."},"output_index":{"type":"number","description":"The index of the output item that was marked done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.done"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added to the summary."},"item_id":{"type":"string","description":"The ID of the item this summary text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","summary_index","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary text is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"text":{"type":"string","description":"The full text of the completed reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.done"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","summary_index","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part this delta is associated with."},"delta":{"type":"string","description":"The text delta that was added to the reasoning content."},"item_id":{"type":"string","description":"The ID of the item this reasoning text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.reasoning_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part."},"item_id":{"type":"string","description":"The ID of the item this reasoning text is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The full text of the completed reasoning content."},"type":{"type":"string","enum":["response.reasoning_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","sequence_number","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is added to."},"delta":{"type":"string","description":"The refusal text that is added."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is added to."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is finalized."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is finalized."},"refusal":{"type":"string","description":"The refusal text that is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","refusal","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web 
search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.generating"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"partial_image_b64":{"type":"string","description":"Base64-encoded partial image data, suitable for rendering as an image."},"partial_image_index":{"type":"number","description":"0-based index for the partial image (backend is 1-based, but this is 0-based for the user)."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["response.image_generation_call.partial_image"],"description":"The type of the event."}},"required":["item_id","output_index","partial_image_b64","partial_image_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"A JSON string containing the partial update to the arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string containing the finalized arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that completed."},"output_index":{"type":"number","description":"The index of the output item that completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that produced this output."},"output_index":{"type":"number","description":"The index of the output item that was processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool 
call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that is being processed."},"output_index":{"type":"number","description":"The index of the output item that is being processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"annotation":{"nullable":true,"description":"The annotation object being added."},"annotation_index":{"type":"number","description":"The index of the annotation within the content part."},"content_index":{"type":"number","description":"The index of the content part within the output item."},"item_id":{"type":"string","description":"The unique identifier of the item to which the annotation is being added."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.annotation.added"],"description":"The type of the event."}},"required":["annotation_index","content_index","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The full response object that is queued."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.queued"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The incremental input data (delta) for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this delta applies 
to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"input":{"type":"string","description":"The complete input data for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this event applies to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.done"],"description":"The type of the event."}},"required":["input","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The completed summary part."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.done"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text content is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the text content is finalized."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text content is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The text content that is finalized."},"type":{"type":"string","enum":["response.output_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","logprobs","output_index","sequence_number","text","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the 
response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The summary part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.added"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text delta was added to."},"delta":{"type":"string","description":"The text delta that was added."},"item_id":{"type":"string","description":"The ID of the output item that the text delta was added to."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text delta was added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","logprobs","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that is done."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the 
event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that is done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was created."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.created"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that was added."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added 
to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.added"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]}]}}}}}}}}}
```

## Code Example

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "o1",
        "messages": [
            {
                "role": "user",
                "content": "Hello"  # insert your prompt here, instead of Hello
            }
        ]
    }
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  const response = await fetch('https://api.aimlapi.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      // insert your AIML API Key instead of 
      'Authorization': 'Bearer ',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'o1',
      messages: [
        {
          role: 'user',
          content: 'Hello'  // insert your prompt here, instead of Hello
        }
      ],
    }),
  });

  const data = await response.json();
  console.log(JSON.stringify(data, null, 2));
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{"id": "chatcmpl-BKKmwhCaUyWjRrBdZLw0CjOwJq9wo", "object": "chat.completion", "choices": [{"index": 0, "finish_reason": "stop", "logprobs": null, "message": {"role": "assistant", "content": "Hello there! How can I help you today?", "refusal": null, "annotations": []}}], "created": 1744186170, "model": "o1-2024-12-17", "usage": {"prompt_tokens": 221, "completion_tokens": 2646, "total_tokens": 2867, "prompt_tokens_details": {"cached_tokens": 0, "audio_tokens": 0}, "completion_tokens_details": {"reasoning_tokens": 0, "audio_tokens": 0, "accepted_prediction_tokens": 0, "rejected_prediction_tokens": 0}}, "system_fingerprint": "fp_688960522e"}
```
{% endcode %}
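If you only need the generated text, it is nested under `choices[0].message.content` in the payload above. A minimal sketch, reusing the `data` variable from the Python example:

```python
# Read the assistant's reply from the chat completion payload above
reply = data["choices"][0]["message"]["content"]
print(reply)  # -> "Hello there! How can I help you today?"
```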
## Code Example #2: Using /responses Endpoint

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "o1",
        "input": "Hello"  # Insert your question for the model here, instead of Hello
    }
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  try {
    const response = await fetch('https://api.aimlapi.com/v1/responses', {
      method: 'POST',
      headers: {
        // Insert your AIML API Key instead of 
        'Authorization': 'Bearer ',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'o1',
        input: 'Hello',  // Insert your question here, instead of Hello
      }),
    });

    if (!response.ok) {
      throw new Error(`HTTP error! Status ${response.status}`);
    }

    const data = await response.json();
    console.log(JSON.stringify(data, null, 2));
  } catch (error) {
    console.error('Error', error);
  }
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "resp_686ba45ce63481a2a4b1fad55d2bea8102a1cc22f1a1bcf1", "object": "response", "created_at": 1751884892, "error": null, "incomplete_details": null, "instructions": null, "max_output_tokens": 512, "model": "o1", "output": [ { "id": "rs_686ba463d18481a29dde85cfd7b055bf02a1cc22f1a1bcf1", "type": "reasoning", "summary": [] }, { "id": "msg_686ba463d4e081a2b2e2aff962ab00f702a1cc22f1a1bcf1", "type": "message", "status": "in_progress", "content": [ { "type": "output_text", "annotations": [], "logprobs": [], "text": "Hello! How can I help you today?" } ], "role": "assistant" } ], "parallel_tool_calls": true, "previous_response_id": null, "reasoning": { "effort": "medium", "summary": null }, "temperature": 1, "text": { "format": { "type": "text" } }, "tool_choice": "auto", "tools": [], "top_p": 1, "truncation": "disabled", "usage": { "input_tokens": 294, "input_tokens_details": { "cached_tokens": 0 }, "output_tokens": 2520, "output_tokens_details": { "reasoning_tokens": 0 }, "total_tokens": 2814 }, "metadata": {}, "output_text": "Hello! How can I help you today?" } ``` {% endcode %}
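In the `/responses` payload above, the generated text lives inside the `output` array, and the response also carries an aggregated `output_text` field for convenience. A minimal extraction sketch, reusing the `data` variable from Code Example #2 (the fallback loop is an assumption for cases where `output_text` is absent):

```python
# Pull the generated text out of the /responses payload above.
text = data.get("output_text")
if text is None:
    # Fallback: scan the "output" array for the assistant message item.
    for item in data.get("output", []):
        if item.get("type") == "message":
            text = "".join(
                part["text"]
                for part in item.get("content", [])
                if part.get("type") == "output_text"
            )
print(text)  # -> "Hello! How can I help you today?"
```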
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/o3-mini.md

# o3-mini

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `o3-mini`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}

## Model Overview

A model designed to excel in complex reasoning tasks, including mathematical problem-solving, programming challenges, and scientific inquiries. It supports adjustable reasoning effort (`low`, `medium`, or `high`) via the `reasoning_effort` parameter, letting you trade response speed and token usage against reasoning depth.

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field: this is what the model will respond to.

:digit\_four: **(Optional) Adjust other parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters to fine-tune the model’s behavior. Below, you can find the corresponding [API schema](#api-schemas), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
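For a quick sanity check before diving into the schema, the request from steps 2 and 3 can be sketched roughly as follows in Python. The `<YOUR_AIMLAPI_KEY>` value and the sample prompt are placeholders, not real values; the full, language-tabbed code example at the bottom of the page remains the canonical reference.

```python
import requests

# Rough sketch of the request described in the steps above.
# "<YOUR_AIMLAPI_KEY>" is a placeholder for the key from your account dashboard.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "o3-mini",
        "messages": [
            {"role": "user", "content": "Explain what a reasoning model is in one sentence."}
        ],
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```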
## API Schemas
Chat Completions vs. Responses API **Chat Completions**\ The *chat completions* API is the older, chat-oriented interface where you send a list of messages (`role: user`, `role: assistant`, etc.), and the model returns a single response. It was designed specifically for conversational workflows and follows a structured chat message format. It is now considered a legacy interface. **Responses**\ The *Responses* API is the newer, unified interface used across OpenAI’s latest models. Instead of focusing only on chat, it supports multiple input types (text, images, audio, tools, etc.) and multiple output modalities (text, JSON, images, audio, video). It is more flexible, more consistent across models, and intended to replace chat completions entirely.
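To make the practical difference concrete, here is a rough sketch of the same prompt expressed as a request body for each interface; the field names come from the schemas below, and apart from the endpoint path the calls are otherwise identical.

```python
# Same prompt expressed for each interface (sketch; both payloads follow
# the request schemas shown further down this page).

# Chat Completions (/v1/chat/completions): a list of role-tagged messages.
chat_completions_payload = {
    "model": "o3-mini",
    "messages": [
        {"role": "user", "content": "Hello"},
    ],
}

# Responses (/v1/responses): a single "input", which may be a plain string
# or structured message items.
responses_payload = {
    "model": "o3-mini",
    "input": "Hello",
}
```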
### Chat Completions Endpoint ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["o3-mini"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. 
Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"reasoning_effort":{"type":"string","enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. 
Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"o3-mini"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ### Responses Endpoint This endpoint is currently used *only* with OpenAI models. Some models support both the `/chat/completions` and `/responses` endpoints, while others support only one of them. 
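Before the full request schema below, here is a rough Python sketch of a `/v1/responses` call for this model that sends a structured `input` message instead of a bare string; the `<YOUR_AIMLAPI_KEY>` value and the prompt text are placeholders.

```python
import requests

# Sketch of a /v1/responses request with a structured "input" message.
# The schema below also accepts a plain string as "input".
# "<YOUR_AIMLAPI_KEY>" is a placeholder, not a real credential.
response = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "o3-mini",
        "input": {
            "type": "message",
            "role": "user",
            "content": [
                {"type": "input_text", "text": "Summarize the difference between the two endpoints."}
            ],
        },
    },
)
response.raise_for_status()
data = response.json()
print(data.get("output_text"))
```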
## POST /v1/responses > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/responses":{"post":{"operationId":"_v1_responses","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["o3-mini"]},"input":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the user role."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. 
Instructions given with the developer or system role take precedence over instructions given with the user role."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"],"description":"An output message from the model."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"],"description":"A tool call to run a function."},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"],"description":"The output of a function tool call."},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"],"description":"A description of the chain of thought used by a reasoning model while generating a response."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The JSON schema describing the tool's input."},"name":{"type":"string","description":"The name of the tool."},"annotations":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Additional annotations about the tool."},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["input_schema","name"]},"description":"The tools available on the server."},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. 
Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"],"description":"A list of tools available on an MCP server."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"],"description":"A request for human approval of a tool invocation."},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"],"description":"A response to an MCP approval request."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"],"description":"An invocation of a tool on an MCP server."},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}],"description":"Text, image, or file inputs to the model, used to generate a response."},"background":{"type":"boolean","default":false,"description":"Whether to run the model response in the background."},"instructions":{"type":"string","nullable":true,"description":"A system (or developer) message inserted into the model's context.\n\nWhen using along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses."},"include":{"type":"array","nullable":true,"items":{"type":"string","enum":["message.input_image.image_url","computer_call_output.output.image_url","reasoning.encrypted_content","code_interpreter_call.outputs"]},"description":"Specify additional output data to include in the model response. 
Currently supported values are:\n- code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.\n- computer_call_output.output.image_url: Include image urls from the computer call output.\n- file_search_call.results: Include the search results of the file search tool call.\n- message.output_text.logprobs: Include logprobs with assistant messages.\n- reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).\n"},"max_output_tokens":{"type":"integer","description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]}]},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"store":{"type":"boolean","nullable":true,"default":false,"description":"Whether to store the generated model response for later retrieval via API."},"stream":{"type":"boolean","nullable":true,"default":false,"description":"If set to true, the model response data will be streamed to the client as it is generated using server-sent events. "},"text":{"type":"object","properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. 
Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["format"],"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"truncation":{"type":"string","enum":["auto","disabled"],"default":"disabled","description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"tools":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","description":"A description of the function. Used by the model to determine whether or not to call the function."}},"required":["name","parameters","strict","type"],"description":"Defines a function in your own code the model can choose to call."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"]}],"description":"How the model should select which tool (or tools) to use when generating a response."}},"required":["model","input"],"title":"o3-mini"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is 
incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. 
A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"]}},"text/event-stream":{"schema":{"oneOf":[{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.done"],"description":"The type of the 
event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.done"],"description":"The type of the event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The partial code snippet being streamed by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The final code snippet output by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.done"],"description":"The type of the event."}},"required":["code","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter is interpreting code."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.interpreting"],"description":"The type of the 
event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. 
Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"Properties of the completed response."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.completed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."},"param":{"type":"string","description":"The error parameter."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["error"],"description":"The type of the event."}},"required":["code","message","param","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is searching."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The function-call arguments delta that is added."},"item_id":{"type":"string","description":"The ID of the output item that the function-call arguments delta is added to."},"output_index":{"type":"number","description":"The index of the output item that the function-call arguments delta is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"The function-call arguments."},"item_id":{"type":"string","description":"The ID of the item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this 
Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. 
One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.in_progress"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was 
created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.failed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The 
error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was incomplete."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.incomplete"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was added."},"output_index":{"type":"number","description":"The index of the output item that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.added"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was marked done."},"output_index":{"type":"number","description":"The index of the output item that was marked done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.done"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added to the summary."},"item_id":{"type":"string","description":"The ID of the item this summary text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","summary_index","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary text is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"text":{"type":"string","description":"The full text of the completed reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.done"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","summary_index","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part this delta is associated with."},"delta":{"type":"string","description":"The text delta that was added to the reasoning content."},"item_id":{"type":"string","description":"The ID of the item this reasoning text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.reasoning_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part."},"item_id":{"type":"string","description":"The ID of the item this reasoning text is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The full text of the completed reasoning content."},"type":{"type":"string","enum":["response.reasoning_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","sequence_number","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is added to."},"delta":{"type":"string","description":"The refusal text that is added."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is added to."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is finalized."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is finalized."},"refusal":{"type":"string","description":"The refusal text that is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","refusal","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web 
search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.generating"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"partial_image_b64":{"type":"string","description":"Base64-encoded partial image data, suitable for rendering as an image."},"partial_image_index":{"type":"number","description":"0-based index for the partial image (backend is 1-based, but this is 0-based for the user)."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["response.image_generation_call.partial_image"],"description":"The type of the event."}},"required":["item_id","output_index","partial_image_b64","partial_image_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"A JSON string containing the partial update to the arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string containing the finalized arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that completed."},"output_index":{"type":"number","description":"The index of the output item that completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that produced this output."},"output_index":{"type":"number","description":"The index of the output item that was processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool 
call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that is being processed."},"output_index":{"type":"number","description":"The index of the output item that is being processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"annotation":{"nullable":true,"description":"The annotation object being added."},"annotation_index":{"type":"number","description":"The index of the annotation within the content part."},"content_index":{"type":"number","description":"The index of the content part within the output item."},"item_id":{"type":"string","description":"The unique identifier of the item to which the annotation is being added."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.annotation.added"],"description":"The type of the event."}},"required":["annotation_index","content_index","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The full response object that is queued."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.queued"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The incremental input data (delta) for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this delta applies 
to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"input":{"type":"string","description":"The complete input data for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this event applies to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.done"],"description":"The type of the event."}},"required":["input","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The completed summary part."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.done"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text content is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the text content is finalized."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text content is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The text content that is finalized."},"type":{"type":"string","enum":["response.output_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","logprobs","output_index","sequence_number","text","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the 
response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The summary part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.added"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text delta was added to."},"delta":{"type":"string","description":"The text delta that was added."},"item_id":{"type":"string","description":"The ID of the output item that the text delta was added to."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text delta was added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","logprobs","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that is done."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the 
event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that is done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was created."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.created"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that was added."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added 
to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.added"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]}]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"o3-mini", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'o3-mini', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': 'chatcmpl-BKKqDz4BBMnR8lWHTwwUiInJtdup0', 'object': 'chat.completion', 'choices': [{'index': 0, 'finish_reason': 'stop', 'message': {'role': 'assistant', 'content': 'Hello there! How can I help you today?', 'refusal': None, 'annotations': []}}], 'created': 1744186373, 'model': 'o3-mini-2025-01-31', 'usage': {'prompt_tokens': 16, 'completion_tokens': 2559, 'total_tokens': 2575, 'prompt_tokens_details': {'cached_tokens': 0, 'audio_tokens': 0}, 'completion_tokens_details': {'reasoning_tokens': 256, 'audio_tokens': 0, 'accepted_prediction_tokens': 0, 'rejected_prediction_tokens': 0}}, 'system_fingerprint': 'fp_617f206dd9'} ``` {% endcode %}
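If you only need the assistant's reply rather than the full payload, it lives in the `choices` array. Below is a minimal sketch that continues the Python example above; it assumes the request succeeded and `data` holds the parsed response:

{% code overflow="wrap" %}
```python
# Extract just the assistant's reply from the /chat/completions response above.
reply = data["choices"][0]["message"]["content"]
print(reply)  # e.g. "Hello there! How can I help you today?"
```
{% endcode %}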
## Code Example #2: Using /responses Endpoint

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "o3-mini",
        "input": "Hello"  # Insert your question for the model here, instead of Hello
    },
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  try {
    const response = await fetch('https://api.aimlapi.com/v1/responses', {
      method: 'POST',
      headers: {
        // Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
        'Authorization': 'Bearer <YOUR_AIMLAPI_KEY>',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'o3-mini',
        input: 'Hello', // Insert your question here, instead of Hello
      }),
    });

    if (!response.ok) {
      throw new Error(`HTTP error! Status ${response.status}`);
    }

    const data = await response.json();
    console.log(JSON.stringify(data, null, 2));
  } catch (error) {
    console.error('Error', error);
  }
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "resp_686ba45ce63481a2a4b1fad55d2bea8102a1cc22f1a1bcf1", "object": "response", "created_at": 1751884892, "error": null, "incomplete_details": null, "instructions": null, "max_output_tokens": 512, "model": "o3-mini", "output": [ { "id": "rs_686ba463d18481a29dde85cfd7b055bf02a1cc22f1a1bcf1", "type": "reasoning", "summary": [] }, { "id": "msg_686ba463d4e081a2b2e2aff962ab00f702a1cc22f1a1bcf1", "type": "message", "status": "in_progress", "content": [ { "type": "output_text", "annotations": [], "logprobs": [], "text": "Hello! How can I help you today?" } ], "role": "assistant" } ], "parallel_tool_calls": true, "previous_response_id": null, "reasoning": { "effort": "medium", "summary": null }, "temperature": 1, "text": { "format": { "type": "text" } }, "tool_choice": "auto", "tools": [], "top_p": 1, "truncation": "disabled", "usage": { "input_tokens": 294, "input_tokens_details": { "cached_tokens": 0 }, "output_tokens": 2520, "output_tokens_details": { "reasoning_tokens": 0 }, "total_tokens": 2814 }, "metadata": {}, "output_text": "Hello! How can I help you today?" } ``` {% endcode %}
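Note the structure of the `/responses` payload: `output` is an array that can mix item types (a `reasoning` item and a `message` item in the example above), and `output_text` is a convenience field holding the assistant text. Here is a minimal sketch of pulling the text out, continuing the Python example above (it assumes `data` holds the parsed response):

{% code overflow="wrap" %}
```python
# Collect assistant text from the /responses payload above.
texts = [
    part["text"]
    for item in data["output"]
    if item["type"] == "message"          # skip "reasoning" and other item types
    for part in item.get("content", [])
    if part["type"] == "output_text"
]
print(texts[0])                 # "Hello! How can I help you today?"
print(data.get("output_text"))  # same text via the convenience field
```
{% endcode %}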
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/o3-pro.md # o3-pro {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `openai/o3-pro` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview Designed for deeper reasoning and tougher questions, o3-pro uses more compute to deliver higher-quality answers. It’s only available in the `/responses` API, which supports multi-turn model interactions and will enable more advanced features in the future. Some complex requests may take a few minutes. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `<YOUR_AIMLAPI_KEY>` with your actual AI/ML API key from your account.\ :black\_small\_square: Insert your question or request into the `input` field; this is what the model will respond to. :digit\_four: **(Optional) Adjust other optional parameters if needed** Only `model` and `input` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them. :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
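Putting the steps above together, here is a minimal sketch of such a request in Python (the full, copy-ready snippets are in the [code example](#code-example) at the bottom of this page; `<YOUR_AIMLAPI_KEY>` is a placeholder for your key, and the generous `timeout` is an assumption to accommodate requests that take a few minutes):

{% code overflow="wrap" %}
```python
import requests
import json

response = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",  # your key from the account dashboard
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/o3-pro",  # required
        "input": "Hello",          # required: your question or request
    },
    timeout=600,  # assumption: allow several minutes for complex requests
)
response.raise_for_status()
print(json.dumps(response.json(), indent=2, ensure_ascii=False))
```
{% endcode %}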
## API Schema {% hint style="warning" %} Note: This model can ONLY be called via the `/responses` endpoint! {% endhint %}
Chat Completions vs. Responses API **Chat Completions**\ The *chat completions* API is the older, chat-oriented interface where you send a list of messages (`role: user`, `role: assistant`, etc.), and the model returns a single response. It was designed specifically for conversational workflows and follows a structured chat message format. It is now considered a legacy interface. **Responses**\ The *Responses* API is the newer, unified interface used across OpenAI’s latest models. Instead of focusing only on chat, it supports multiple input types (text, images, audio, tools, etc.) and multiple output modalities (text, JSON, images, audio, video). It is more flexible, more consistent across models, and intended to replace chat completions entirely.
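To make the difference concrete, here is a minimal sketch of the two request bodies for the same prompt (the model IDs are examples: `o3-mini` supports `/chat/completions` as shown earlier in this documentation, while `openai/o3-pro` is available only via `/responses`):

{% code overflow="wrap" %}
```python
# Chat Completions: a list of role-tagged messages.
chat_completions_body = {
    "model": "o3-mini",
    "messages": [{"role": "user", "content": "Hello"}],
}

# Responses: a single `input` field (a string, or a list of structured input items).
responses_body = {
    "model": "openai/o3-pro",
    "input": "Hello",
}
```
{% endcode %}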
This endpoint is currently used *only* with OpenAI models. Some models support both the `/chat/completions` and `/responses` endpoints, while others (like `openai/o3-pro`) support only one of them. ## POST /v1/responses > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/responses":{"post":{"operationId":"_v1_responses","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/o3-pro"]},"input":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the user role."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"],"description":"An output message from the model."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"],"description":"The results of a web search tool call."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"],"description":"A tool call to run a function."},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"],"description":"The output of a function tool call."},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"],"description":"A description of the chain of thought used by a reasoning model while generating a response."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The JSON schema describing the tool's input."},"name":{"type":"string","description":"The name of the tool."},"annotations":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Additional annotations about the tool."},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["input_schema","name"]},"description":"The tools available on the server."},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"],"description":"A list of tools available on an MCP server."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"],"description":"A request for human approval of a tool invocation."},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"],"description":"A response to an MCP approval request."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. 
Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"],"description":"An invocation of a tool on an MCP server."},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}],"description":"Text, image, or file inputs to the model, used to generate a response."},"background":{"type":"boolean","default":false,"description":"Whether to run the model response in the background."},"instructions":{"type":"string","nullable":true,"description":"A system (or developer) message inserted into the model's context.\n\nWhen using along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses."},"include":{"type":"array","nullable":true,"items":{"type":"string","enum":["message.input_image.image_url","computer_call_output.output.image_url","reasoning.encrypted_content","code_interpreter_call.outputs"]},"description":"Specify additional output data to include in the model response. Currently supported values are:\n- code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.\n- computer_call_output.output.image_url: Include image urls from the computer call output.\n- file_search_call.results: Include the search results of the file search tool call.\n- message.output_text.logprobs: Include logprobs with assistant messages.\n- reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).\n"},"max_output_tokens":{"type":"integer","description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. 
One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]}]},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"store":{"type":"boolean","nullable":true,"default":false,"description":"Whether to store the generated model response for later retrieval via API."},"stream":{"type":"boolean","nullable":true,"default":false,"description":"If set to true, the model response data will be streamed to the client as it is generated using server-sent events. "},"text":{"type":"object","properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["format"],"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"truncation":{"type":"string","enum":["auto","disabled"],"default":"disabled","description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"tools":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","description":"A description of the function. Used by the model to determine whether or not to call the function."}},"required":["name","parameters","strict","type"],"description":"Defines a function in your own code the model can choose to call."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. 
Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"]}],"description":"How the model should select which tool (or tools) to use when generating a response."}},"required":["model","input"],"title":"openai/o3-pro"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. 
Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. 
Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"]}},"text/event-stream":{"schema":{"oneOf":[{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.done"],"description":"The type of the 
event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.done"],"description":"The type of the event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The partial code snippet being streamed by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The final code snippet output by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.done"],"description":"The type of the event."}},"required":["code","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter is interpreting code."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.interpreting"],"description":"The type of the 
event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. 
Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string","description":"The name of the tool to run."},"server_label":{"type":"string","description":"The label of the MCP server making the request."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string","description":"The name of the tool to run."},"server_label":{"type":"string","description":"The label of the MCP server making the request."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"Properties of the completed response."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.completed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."},"param":{"type":"string","description":"The error parameter."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["error"],"description":"The type of the event."}},"required":["code","message","param","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is searching."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The function-call arguments delta that is added."},"item_id":{"type":"string","description":"The ID of the output item that the function-call arguments delta is added to."},"output_index":{"type":"number","description":"The index of the output item that the function-call arguments delta is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"The function-call arguments."},"item_id":{"type":"string","description":"The ID of the item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this 
Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. 
One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.in_progress"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was 
created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.failed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The 
error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was incomplete."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.incomplete"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was added."},"output_index":{"type":"number","description":"The index of the output item that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.added"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was marked done."},"output_index":{"type":"number","description":"The index of the output item that was marked done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.done"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added to the summary."},"item_id":{"type":"string","description":"The ID of the item this summary text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","summary_index","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary text is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"text":{"type":"string","description":"The full text of the completed reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.done"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","summary_index","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part this delta is associated with."},"delta":{"type":"string","description":"The text delta that was added to the reasoning content."},"item_id":{"type":"string","description":"The ID of the item this reasoning text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.reasoning_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part."},"item_id":{"type":"string","description":"The ID of the item this reasoning text is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The full text of the completed reasoning content."},"type":{"type":"string","enum":["response.reasoning_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","sequence_number","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is added to."},"delta":{"type":"string","description":"The refusal text that is added."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is added to."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is finalized."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is finalized."},"refusal":{"type":"string","description":"The refusal text that is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","refusal","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web 
search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.generating"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"partial_image_b64":{"type":"string","description":"Base64-encoded partial image data, suitable for rendering as an image."},"partial_image_index":{"type":"number","description":"0-based index for the partial image (backend is 1-based, but this is 0-based for the user)."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["response.image_generation_call.partial_image"],"description":"The type of the event."}},"required":["item_id","output_index","partial_image_b64","partial_image_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"A JSON string containing the partial update to the arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string containing the finalized arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that completed."},"output_index":{"type":"number","description":"The index of the output item that completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that produced this output."},"output_index":{"type":"number","description":"The index of the output item that was processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool 
call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that is being processed."},"output_index":{"type":"number","description":"The index of the output item that is being processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"annotation":{"nullable":true,"description":"The annotation object being added."},"annotation_index":{"type":"number","description":"The index of the annotation within the content part."},"content_index":{"type":"number","description":"The index of the content part within the output item."},"item_id":{"type":"string","description":"The unique identifier of the item to which the annotation is being added."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.annotation.added"],"description":"The type of the event."}},"required":["annotation_index","content_index","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The full response object that is queued."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.queued"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The incremental input data (delta) for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this delta applies 
to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"input":{"type":"string","description":"The complete input data for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this event applies to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.done"],"description":"The type of the event."}},"required":["input","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The completed summary part."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.done"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text content is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the text content is finalized."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text content is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The text content that is finalized."},"type":{"type":"string","enum":["response.output_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","logprobs","output_index","sequence_number","text","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the 
response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The summary part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.added"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text delta was added to."},"delta":{"type":"string","description":"The text delta that was added."},"item_id":{"type":"string","description":"The ID of the output item that the text delta was added to."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text delta was added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","logprobs","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that is done."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the 
event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that is done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was created."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.created"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that was added."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added 
to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.added"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]}]}}}}}}}}}
```

## Code Example

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/o3-pro",
        "input": "Hello"  # Insert your question for the model here, instead of Hello
    },
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  try {
    const response = await fetch('https://api.aimlapi.com/v1/responses', {
      method: 'POST',
      headers: {
        // Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
        'Authorization': 'Bearer <YOUR_AIMLAPI_KEY>',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'openai/o3-pro',
        input: 'Hello', // Insert your question here, instead of Hello
      }),
    });

    if (!response.ok) {
      throw new Error(`HTTP error! Status ${response.status}`);
    }

    const data = await response.json();
    console.log(JSON.stringify(data, null, 2));
  } catch (error) {
    console.error('Error', error);
  }
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "resp_686ba45ce63481a2a4b1fad55d2bea8102a1cc22f1a1bcf1",
  "object": "response",
  "created_at": 1751884892,
  "error": null,
  "incomplete_details": null,
  "instructions": null,
  "max_output_tokens": 512,
  "model": "o3-pro-2025-06-10",
  "output": [
    {
      "id": "rs_686ba463d18481a29dde85cfd7b055bf02a1cc22f1a1bcf1",
      "type": "reasoning",
      "summary": []
    },
    {
      "id": "msg_686ba463d4e081a2b2e2aff962ab00f702a1cc22f1a1bcf1",
      "type": "message",
      "status": "in_progress",
      "content": [
        {
          "type": "output_text",
          "annotations": [],
          "logprobs": [],
          "text": "Hello! How can I help you today?"
        }
      ],
      "role": "assistant"
    }
  ],
  "parallel_tool_calls": true,
  "previous_response_id": null,
  "reasoning": {
    "effort": "medium",
    "summary": null
  },
  "temperature": 1,
  "text": {
    "format": {
      "type": "text"
    }
  },
  "tool_choice": "auto",
  "tools": [],
  "top_p": 1,
  "truncation": "disabled",
  "usage": {
    "input_tokens": 294,
    "input_tokens_details": {
      "cached_tokens": 0
    },
    "output_tokens": 2520,
    "output_tokens_details": {
      "reasoning_tokens": 0
    },
    "total_tokens": 2814
  },
  "metadata": {},
  "output_text": "Hello! How can I help you today?"
}
```
{% endcode %}
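If you prefer not to rely on the top-level `output_text` convenience field, you can collect the generated text from the `output` array yourself. A minimal sketch in Python, assuming the response structure shown above (`extract_text` is our own helper name, not part of the API):

```python
def extract_text(data: dict) -> str:
    """Concatenate all output_text parts from a /v1/responses payload."""
    parts = []
    for item in data.get("output", []):
        if item.get("type") != "message":
            continue  # skip reasoning items, tool calls, etc.
        for content in item.get("content", []):
            if content.get("type") == "output_text":
                parts.append(content["text"])
    return "".join(parts)

# With the response shown above:
# extract_text(data) -> "Hello! How can I help you today?"
```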
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/o3.md

# o3

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `openai/o3-2025-04-16`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}

## Model Overview

OpenAI's most capable reasoning model to date, showing strong performance across programming, mathematics, science, visual understanding, and more. The model is well-suited for complex tasks that involve layered reasoning and non-obvious answers. In evaluations on challenging, real-world problems, o3 makes 20% fewer critical errors than [o1](https://docs.aimlapi.com/api-references/text-models-llm/openai/o1).

## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. A compact sketch of the resulting call is also shown right after these steps.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `<YOUR_AIMLAPI_KEY>` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to.

:digit\_four: **(Optional)** **Adjust other parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schemas), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
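For orientation, here is a compact Python sketch of steps 2–5. It mirrors the full code example at the bottom of the page; `<YOUR_AIMLAPI_KEY>` is a placeholder for your key, and the model ID and parameter names come from the Chat Completions schema below.

```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",  # your key from the account dashboard
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/o3-2025-04-16",            # required
        "messages": [                               # required
            {"role": "user", "content": "Hello"}    # put your question in `content`
        ],
        # optional parameters such as max_completion_tokens or reasoning_effort go here
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```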
## API Schemas
Chat Completions vs. Responses API

**Chat Completions**\
The *chat completions* API is the older, chat-oriented interface where you send a list of messages (`role: user`, `role: assistant`, etc.), and the model returns a single response. It was designed specifically for conversational workflows and follows a structured chat message format. It is now considered a legacy interface.

**Responses**\
The *Responses* API is the newer, unified interface used across OpenAI’s latest models. Instead of focusing only on chat, it supports multiple input types (text, images, audio, tools, etc.) and multiple output modalities (text, JSON, images, audio, video). It is more flexible, more consistent across models, and intended to replace chat completions entirely.
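To make the difference concrete, below is a sketch of the request bodies the two endpoints expect for the same prompt. The field names are taken from the schemas on this page; apart from the URL path (`/v1/chat/completions` vs. `/v1/responses`), the rest of the call stays the same.

```python
# Chat Completions (/v1/chat/completions): a list of role-tagged messages
chat_completions_body = {
    "model": "openai/o3-2025-04-16",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello"},
    ],
}

# Responses (/v1/responses): a single `input`, which may be a plain string
# or a list of typed content items (text, images, files, ...)
responses_body = {
    "model": "openai/o3-2025-04-16",
    "input": "Hello",
}
```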
### Chat Completions Endpoint ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/o3-2025-04-16"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"reasoning_effort":{"type":"string","enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"openai/o3-2025-04-16"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ### Responses Endpoint This endpoint is currently used *only* with OpenAI models. Some models support both the `/chat/completions` and `/responses` endpoints, while others support only one of them. ## POST /v1/responses > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/responses":{"post":{"operationId":"_v1_responses","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/o3-2025-04-16"]},"input":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the user role."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. 
Instructions given with the developer or system role take precedence over instructions given with the user role."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"],"description":"An output message from the model."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"],"description":"The results of a web search tool call."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"],"description":"A tool call to run a function."},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"],"description":"The output of a function tool call."},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"],"description":"A description of the chain of thought used by a reasoning model while generating a response."},{"type":"object","properties":{"code":{"type":"string","description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","interpreting"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["code","id","outputs","status","type","container_id"],"description":"A tool call to run code."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The JSON schema describing the tool's input."},"name":{"type":"string","description":"The name of the tool."},"annotations":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Additional annotations about the tool."},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["input_schema","name"]},"description":"The tools available on the server."},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"],"description":"A list of tools available on an MCP server."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"],"description":"A request for human approval of a tool invocation."},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"],"description":"A response to an MCP approval request."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"],"description":"An invocation of a tool on an MCP server."},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}],"description":"Text, image, or file inputs to the model, used to generate a response."},"background":{"type":"boolean","default":false,"description":"Whether to run the model response in the background."},"instructions":{"type":"string","nullable":true,"description":"A system (or developer) message inserted into the model's context.\n\nWhen using along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses."},"include":{"type":"array","nullable":true,"items":{"type":"string","enum":["message.input_image.image_url","computer_call_output.output.image_url","reasoning.encrypted_content","code_interpreter_call.outputs"]},"description":"Specify additional output data to include in the model response. Currently supported values are:\n- code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.\n- computer_call_output.output.image_url: Include image urls from the computer call output.\n- file_search_call.results: Include the search results of the file search tool call.\n- message.output_text.logprobs: Include logprobs with assistant messages.\n- reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).\n"},"max_output_tokens":{"type":"integer","description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]}]},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"store":{"type":"boolean","nullable":true,"default":false,"description":"Whether to store the generated model response for later retrieval via API."},"stream":{"type":"boolean","nullable":true,"default":false,"description":"If set to true, the model response data will be streamed to the client as it is generated using server-sent events. "},"text":{"type":"object","properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["format"],"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"truncation":{"type":"string","enum":["auto","disabled"],"default":"disabled","description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"tools":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","description":"A description of the function. Used by the model to determine whether or not to call the function."}},"required":["name","parameters","strict","type"],"description":"Defines a function in your own code the model can choose to call."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. 
Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"]}],"description":"How the model should select which tool (or tools) to use when generating a response."}},"required":["model","input"],"title":"openai/o3-2025-04-16"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent 
to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. 
Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"]}},"text/event-stream":{"schema":{"oneOf":[{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.done"],"description":"The type of the 
event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.done"],"description":"The type of the event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The partial code snippet being streamed by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The final code snippet output by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.done"],"description":"The type of the event."}},"required":["code","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter is interpreting code."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.interpreting"],"description":"The type of the 
event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. 
Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"Properties of the completed response."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.completed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."},"param":{"type":"string","description":"The error parameter."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["error"],"description":"The type of the event."}},"required":["code","message","param","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is searching."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The function-call arguments delta that is added."},"item_id":{"type":"string","description":"The ID of the output item that the function-call arguments delta is added to."},"output_index":{"type":"number","description":"The index of the output item that the function-call arguments delta is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"The function-call arguments."},"item_id":{"type":"string","description":"The ID of the item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this 
Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. 
One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.in_progress"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was 
created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.failed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The 
error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was incomplete."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.incomplete"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was added."},"output_index":{"type":"number","description":"The index of the output item that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.added"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was marked done."},"output_index":{"type":"number","description":"The index of the output item that was marked done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.done"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added to the summary."},"item_id":{"type":"string","description":"The ID of the item this summary text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","summary_index","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary text is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"text":{"type":"string","description":"The full text of the completed reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.done"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","summary_index","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part this delta is associated with."},"delta":{"type":"string","description":"The text delta that was added to the reasoning content."},"item_id":{"type":"string","description":"The ID of the item this reasoning text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.reasoning_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part."},"item_id":{"type":"string","description":"The ID of the item this reasoning text is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The full text of the completed reasoning content."},"type":{"type":"string","enum":["response.reasoning_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","sequence_number","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is added to."},"delta":{"type":"string","description":"The refusal text that is added."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is added to."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is finalized."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is finalized."},"refusal":{"type":"string","description":"The refusal text that is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","refusal","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web 
search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.generating"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"partial_image_b64":{"type":"string","description":"Base64-encoded partial image data, suitable for rendering as an image."},"partial_image_index":{"type":"number","description":"0-based index for the partial image (backend is 1-based, but this is 0-based for the user)."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["response.image_generation_call.partial_image"],"description":"The type of the event."}},"required":["item_id","output_index","partial_image_b64","partial_image_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"A JSON string containing the partial update to the arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string containing the finalized arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that completed."},"output_index":{"type":"number","description":"The index of the output item that completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that produced this output."},"output_index":{"type":"number","description":"The index of the output item that was processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool 
call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that is being processed."},"output_index":{"type":"number","description":"The index of the output item that is being processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"annotation":{"nullable":true,"description":"The annotation object being added."},"annotation_index":{"type":"number","description":"The index of the annotation within the content part."},"content_index":{"type":"number","description":"The index of the content part within the output item."},"item_id":{"type":"string","description":"The unique identifier of the item to which the annotation is being added."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.annotation.added"],"description":"The type of the event."}},"required":["annotation_index","content_index","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The full response object that is queued."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.queued"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The incremental input data (delta) for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this delta applies 
to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"input":{"type":"string","description":"The complete input data for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this event applies to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.done"],"description":"The type of the event."}},"required":["input","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The completed summary part."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.done"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text content is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the text content is finalized."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text content is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The text content that is finalized."},"type":{"type":"string","enum":["response.output_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","logprobs","output_index","sequence_number","text","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the 
response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The summary part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.added"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text delta was added to."},"delta":{"type":"string","description":"The text delta that was added."},"item_id":{"type":"string","description":"The ID of the output item that the text delta was added to."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text delta was added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","logprobs","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that is done."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the 
event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that is done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was created."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.created"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that was added."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added 
to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.added"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]}]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"openai/o3-2025-04-16", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'openai/o3-2025-04-16', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "chatcmpl-BhaL4MrWXyha1PD3AHkJ2mmHXgEcu",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?",
        "refusal": null,
        "annotations": []
      }
    }
  ],
  "created": 1749727490,
  "model": "o3-2025-04-16",
  "usage": {
    "prompt_tokens": 34,
    "completion_tokens": 454,
    "total_tokens": 488,
    "prompt_tokens_details": {
      "cached_tokens": 0,
      "audio_tokens": 0
    },
    "completion_tokens_details": {
      "reasoning_tokens": 0,
      "audio_tokens": 0,
      "accepted_prediction_tokens": 0,
      "rejected_prediction_tokens": 0
    }
  },
  "system_fingerprint": null
}
```
{% endcode %}
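For quick post-processing, note that the assistant's reply sits inside the `choices` array rather than at the top level. A minimal sketch, assuming `data` holds the parsed response shown above:

```python
# Assumes `data` is the parsed /v1/chat/completions response from the example above.
reply = data["choices"][0]["message"]["content"]
print(reply)  # -> "Hello! How can I help you today?"

# Token accounting (useful for cost monitoring) is reported under "usage".
print(data["usage"]["total_tokens"])  # -> 488 in this sample response
```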
## Code Example #2: Using /responses Endpoint

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/responses",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/o3-2025-04-16",
        "input": "Hello"  # Insert your question for the model here, instead of Hello
    },
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  try {
    const response = await fetch('https://api.aimlapi.com/v1/responses', {
      method: 'POST',
      headers: {
        // Insert your AIML API Key instead of
        'Authorization': 'Bearer ',
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        model: 'openai/o3-2025-04-16',
        input: 'Hello', // Insert your question here, instead of Hello
      }),
    });

    if (!response.ok) {
      throw new Error(`HTTP error! Status ${response.status}`);
    }

    const data = await response.json();
    console.log(JSON.stringify(data, null, 2));
  } catch (error) {
    console.error('Error', error);
  }
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "resp_686ba45ce63481a2a4b1fad55d2bea8102a1cc22f1a1bcf1",
  "object": "response",
  "created_at": 1751884892,
  "error": null,
  "incomplete_details": null,
  "instructions": null,
  "max_output_tokens": 512,
  "model": "openai/o3-2025-04-16",
  "output": [
    {
      "id": "rs_686ba463d18481a29dde85cfd7b055bf02a1cc22f1a1bcf1",
      "type": "reasoning",
      "summary": []
    },
    {
      "id": "msg_686ba463d4e081a2b2e2aff962ab00f702a1cc22f1a1bcf1",
      "type": "message",
      "status": "in_progress",
      "content": [
        {
          "type": "output_text",
          "annotations": [],
          "logprobs": [],
          "text": "Hello! How can I help you today?"
        }
      ],
      "role": "assistant"
    }
  ],
  "parallel_tool_calls": true,
  "previous_response_id": null,
  "reasoning": {
    "effort": "medium",
    "summary": null
  },
  "temperature": 1,
  "text": {
    "format": {
      "type": "text"
    }
  },
  "tool_choice": "auto",
  "tools": [],
  "top_p": 1,
  "truncation": "disabled",
  "usage": {
    "input_tokens": 294,
    "input_tokens_details": {
      "cached_tokens": 0
    },
    "output_tokens": 2520,
    "output_tokens_details": {
      "reasoning_tokens": 0
    },
    "total_tokens": 2814
  },
  "metadata": {},
  "output_text": "Hello! How can I help you today?"
}
```
{% endcode %}
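Note that the `output` array may begin with a `reasoning` item rather than the assistant message, so it is safer to filter by item type than to read `output[0]` directly. A minimal extraction sketch, assuming `data` holds the parsed `/v1/responses` payload shown above:

```python
# Assumes `data` is the parsed /v1/responses payload from the example above.
# Collect the text of every output_text part inside message items.
texts = [
    part["text"]
    for item in data["output"]
    if item["type"] == "message"
    for part in item["content"]
    if part["type"] == "output_text"
]
print("\n".join(texts))  # -> "Hello! How can I help you today?"
```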
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/openai/o4-mini.md # o4-mini {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `openai/o4-mini-2025-04-16` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview The newest small model in our o-series lineup, built for speed and smart reasoning, with outstanding efficiency in both coding and visual tasks. ## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Make sure the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field; this is what the model will respond to.

:digit\_four: **(Optional) Adjust other parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schemas), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds. A minimal request sketch is also shown right after these steps.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
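For reference before you scroll down, here is a minimal request sketch in Python that follows the steps above. Only the required `model` and `messages` fields are set; the API key after `Bearer ` and the prompt are yours to replace:

```python
import requests

# Minimal sketch of the request described in the steps above.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key after "Bearer "
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/o4-mini-2025-04-16",
        "messages": [
            {"role": "user", "content": "Hello"}  # insert your prompt here, instead of Hello
        ],
    },
)
print(response.json())
```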
## API Schemas
Chat Completions vs. Responses API **Chat Completions**\ The *chat completions* API is the older, chat-oriented interface where you send a list of messages (`role: user`, `role: assistant`, etc.), and the model returns a single response. It was designed specifically for conversational workflows and follows a structured chat message format. It is now considered a legacy interface. **Responses**\ The *Responses* API is the newer, unified interface used across OpenAI’s latest models. Instead of focusing only on chat, it supports multiple input types (text, images, audio, tools, etc.) and multiple output modalities (text, JSON, images, audio, video). It is more flexible, more consistent across models, and intended to replace chat completions entirely.
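To make the difference concrete, here is a minimal sketch of the same single-turn prompt expressed as request bodies for both endpoints. The shapes mirror the code examples elsewhere in these docs; only the required fields are shown:

```python
# Chat Completions: a list of role-tagged messages.
# POST https://api.aimlapi.com/v1/chat/completions
chat_completions_body = {
    "model": "openai/o4-mini-2025-04-16",
    "messages": [{"role": "user", "content": "Hello"}],
}

# Responses: a single `input` field, which may be a plain string or a list of
# structured items (text, images, files), as described in the schema below.
# POST https://api.aimlapi.com/v1/responses
responses_body = {
    "model": "openai/o4-mini-2025-04-16",
    "input": "Hello",
}
```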
### Chat Completions Endpoint ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/o4-mini-2025-04-16"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]},{"type":"object","properties":{"type":{"type":"string","enum":["file"],"description":"The type of the content part."},"file":{"type":"object","properties":{"file_data":{"type":"string","description":"The file data, encoded in base64 and passed to the model as a string. Only PDF format is supported.\n - Maximum size per file: Up to 512 MB and up to 2 million tokens.\n - Maximum number of files: Up to 20 files can be attached to a single GPT application or Assistant. This limit applies throughout the application's lifetime.\n - Maximum total file storage per user: 10 GB."},"filename":{"type":"string","description":"The file name specified by the user. This name can be used to reference the file when interacting with the model, especially if multiple files are uploaded."}}}},"required":["type","file"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"reasoning_effort":{"type":"string","enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Currently supported values are low, medium, and high. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"openai/o4-mini-2025-04-16"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ### Responses Endpoint This endpoint is currently used *only* with OpenAI models. Some models support both the `/chat/completions` and `/responses` endpoints, while others support only one of them. ## POST /v1/responses > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/responses":{"post":{"operationId":"_v1_responses","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/o4-mini-2025-04-16"]},"input":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the user role."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. 
Instructions given with the developer or system role take precedence over instructions given with the user role."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"],"description":"An output message from the model."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"],"description":"The results of a web search tool call."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"],"description":"A tool call to run a function."},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"],"description":"The output of a function tool call."},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"],"description":"A description of the chain of thought used by a reasoning model while generating a response."},{"type":"object","properties":{"code":{"type":"string","description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","interpreting"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["code","id","outputs","status","type","container_id"],"description":"A tool call to run code."},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The JSON schema describing the tool's input."},"name":{"type":"string","description":"The name of the tool."},"annotations":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Additional annotations about the tool."},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["input_schema","name"]},"description":"The tools available on the server."},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"],"description":"A list of tools available on an MCP server."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"],"description":"A request for human approval of a tool invocation."},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"],"description":"A response to an MCP approval request."},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"],"description":"An invocation of a tool on an MCP server."},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}],"description":"Text, image, or file inputs to the model, used to generate a response."},"background":{"type":"boolean","default":false,"description":"Whether to run the model response in the background."},"instructions":{"type":"string","nullable":true,"description":"A system (or developer) message inserted into the model's context.\n\nWhen using along with previous_response_id, the instructions from a previous response will not be carried over to the next response. This makes it simple to swap out system (or developer) messages in new responses."},"include":{"type":"array","nullable":true,"items":{"type":"string","enum":["message.input_image.image_url","computer_call_output.output.image_url","reasoning.encrypted_content","code_interpreter_call.outputs"]},"description":"Specify additional output data to include in the model response. Currently supported values are:\n- code_interpreter_call.outputs: Includes the outputs of python code execution in code interpreter tool call items.\n- computer_call_output.output.image_url: Include image urls from the computer call output.\n- file_search_call.results: Include the search results of the file search tool call.\n- message.output_text.logprobs: Include logprobs with assistant messages.\n- reasoning.encrypted_content: Includes an encrypted version of reasoning tokens in reasoning item outputs. This enables reasoning items to be used in multi-turn conversations when using the Responses API statelessly (like when the store parameter is set to false, or when an organization is enrolled in the zero data retention program).\n"},"max_output_tokens":{"type":"integer","description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]}]},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"store":{"type":"boolean","nullable":true,"default":false,"description":"Whether to store the generated model response for later retrieval via API."},"stream":{"type":"boolean","nullable":true,"default":false,"description":"If set to true, the model response data will be streamed to the client as it is generated using server-sent events. "},"text":{"type":"object","properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["format"],"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"truncation":{"type":"string","enum":["auto","disabled"],"default":"disabled","description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"tools":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","description":"A description of the function. Used by the model to determine whether or not to call the function."}},"required":["name","parameters","strict","type"],"description":"Defines a function in your own code the model can choose to call."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. 
Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"]}],"description":"How the model should select which tool (or tools) to use when generating a response."}},"required":["model","input"],"title":"openai/o4-mini-2025-04-16"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, 
equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. 
Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"]}},"text/event-stream":{"schema":{"oneOf":[{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.done"],"description":"The type of the 
event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.delta"],"description":"The type of the event."}},"required":["delta","sequence_number","type"]},{"type":"object","properties":{"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.audio.transcript.done"],"description":"The type of the event."}},"required":["sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The partial code snippet being streamed by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The final code snippet output by the code interpreter."},"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call_code.done"],"description":"The type of the event."}},"required":["code","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter call is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the code interpreter tool call item."},"output_index":{"type":"number","description":"The index of the output item in the response for which the code interpreter is interpreting code."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.code_interpreter_call.interpreting"],"description":"The type of the 
event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. 
Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. 
For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. 
Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. 
Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. 
Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. 
Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"Properties of the completed response."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.completed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."},"param":{"type":"string","description":"The error parameter."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["error"],"description":"The type of the event."}},"required":["code","message","param","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is initiated."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the output item that the file search call is initiated."},"output_index":{"type":"number","description":"The index of the output item that the file search call is searching."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.file_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The function-call arguments delta that is added."},"item_id":{"type":"string","description":"The ID of the output item that the function-call arguments delta is added to."},"output_index":{"type":"number","description":"The index of the output item that the function-call arguments delta is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"The function-call arguments."},"item_id":{"type":"string","description":"The ID of the item."},"output_index":{"type":"number"},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.function_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this 
Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. 
One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that is in progress."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.in_progress"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was 
created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanation from the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The x-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.failed"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The 
error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was incomplete."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.incomplete"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. 
Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. 
For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. 
For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. 
Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. 
Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was added."},"output_index":{"type":"number","description":"The index of the output item that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.added"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"item":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}],"description":"The output item that was marked done."},"output_index":{"type":"number","description":"The index of the output item that was marked done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_item.done"],"description":"The type of the event."}},"required":["item","output_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The text delta that was added to the summary."},"item_id":{"type":"string","description":"The ID of the item this summary text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","summary_index","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary text is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"text":{"type":"string","description":"The full text of the completed reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_text.done"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","summary_index","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part this delta is associated with."},"delta":{"type":"string","description":"The text delta that was added to the reasoning content."},"item_id":{"type":"string","description":"The ID of the item this reasoning text delta is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text delta is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.reasoning_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the reasoning content part."},"item_id":{"type":"string","description":"The ID of the item this reasoning text is associated with."},"output_index":{"type":"number","description":"The index of the output item this reasoning text is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The full text of the completed reasoning content."},"type":{"type":"string","enum":["response.reasoning_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","sequence_number","text","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is added to."},"delta":{"type":"string","description":"The refusal text that is added."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is added to."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the refusal text is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the refusal text is finalized."},"output_index":{"type":"number","description":"The index of the output item that the refusal text is finalized."},"refusal":{"type":"string","description":"The refusal text that is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.refusal.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","refusal","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web 
search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"Unique ID for the output item associated with the web search call."},"output_index":{"type":"number","description":"The index of the output item that the web search call is associated with."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.web_search_call.searching"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.generating"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.image_generation_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the image generation item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"partial_image_b64":{"type":"string","description":"Base64-encoded partial image data, suitable for rendering as an image."},"partial_image_index":{"type":"number","description":"0-based index for the partial image (backend is 1-based, but this is 0-based for the user)."},"sequence_number":{"type":"number","description":"The sequence number of this 
event."},"type":{"type":"string","enum":["response.image_generation_call.partial_image"],"description":"The type of the event."}},"required":["item_id","output_index","partial_image_b64","partial_image_index","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"A JSON string containing the partial update to the arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string containing the finalized arguments for the MCP tool call."},"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call_arguments.done"],"description":"The type of the event."}},"required":["arguments","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that completed."},"output_index":{"type":"number","description":"The index of the output item that completed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The unique identifier of the MCP tool call item being processed."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_call.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that produced this output."},"output_index":{"type":"number","description":"The index of the output item that was processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.completed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool 
call item that failed."},"output_index":{"type":"number","description":"The index of the output item that failed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.failed"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the MCP tool call item that is being processed."},"output_index":{"type":"number","description":"The index of the output item that is being processed."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.mcp_list_tools.in_progress"],"description":"The type of the event."}},"required":["item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"annotation":{"nullable":true,"description":"The annotation object being added."},"annotation_index":{"type":"number","description":"The index of the annotation within the content part."},"content_index":{"type":"number","description":"The index of the content part within the output item."},"item_id":{"type":"string","description":"The unique identifier of the item to which the annotation is being added."},"output_index":{"type":"number","description":"The index of the output item in the response's output array."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.annotation.added"],"description":"The type of the event."}},"required":["annotation_index","content_index","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The full response object that is queued."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.queued"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"delta":{"type":"string","description":"The incremental input data (delta) for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this delta applies 
to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.delta"],"description":"The type of the event."}},"required":["delta","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"input":{"type":"string","description":"The complete input data for the custom tool call."},"item_id":{"type":"string","description":"Unique identifier for the API item associated with this event."},"output_index":{"type":"number","description":"The index of the output this event applies to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.custom_tool_call_input.done"],"description":"The type of the event."}},"required":["input","item_id","output_index","sequence_number","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The completed summary part."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.done"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text content is finalized."},"item_id":{"type":"string","description":"The ID of the output item that the text content is finalized."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text content is finalized."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"text":{"type":"string","description":"The text content that is finalized."},"type":{"type":"string","enum":["response.output_text.done"],"description":"The type of the event."}},"required":["content_index","item_id","logprobs","output_index","sequence_number","text","type"]},{"type":"object","properties":{"item_id":{"type":"string","description":"The ID of the item this summary part is associated with."},"output_index":{"type":"number","description":"The index of the output item this summary part is associated with."},"part":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the 
response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the event."}},"required":["text","type"],"description":"The summary part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"summary_index":{"type":"number","description":"The index of the summary part within the reasoning summary."},"type":{"type":"string","enum":["response.reasoning_summary_part.added"],"description":"The type of the event."}},"required":["item_id","output_index","part","sequence_number","summary_index","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that the text delta was added to."},"delta":{"type":"string","description":"The text delta that was added."},"item_id":{"type":"string","description":"The ID of the output item that the text delta was added to."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]},"description":"The log probabilities of the tokens in the delta."},"output_index":{"type":"number","description":"The index of the output item that the text delta was added to."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.output_text.delta"],"description":"The type of the event."}},"required":["content_index","delta","item_id","logprobs","output_index","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that is done."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the 
event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that is done."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.done"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]},{"type":"object","properties":{"response":{"type":"object","properties":{"background":{"type":"boolean","nullable":true,"description":"Whether to run the model response in the background."},"created_at":{"type":"number","description":"Unix timestamp (in seconds) of when this Response was created."},"error":{"type":"object","nullable":true,"properties":{"code":{"type":"string","description":"The error code for the response."},"message":{"type":"string","description":"A human-readable description of the error."}},"required":["code","message"],"description":"An error object returned when the model fails to generate a Response."},"id":{"type":"string","description":"Unique identifier for this Response."},"incomplete_details":{"type":"object","nullable":true,"properties":{"reason":{"type":"string","description":"The reason why the response is incomplete."}},"description":"Details about why the response is incomplete."},"instructions":{"anyOf":[{"type":"string","description":"A text input to the model, equivalent to a text input with the developer role."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","assistant","system","developer"],"description":"The role of the message input."},"content":{"anyOf":[{"type":"string","description":"A text input to the model."},{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. 
Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}],"description":"Text, image, or audio input to the model, used to generate a response. Can also contain previous assistant responses."}},"required":["role","content"],"description":"A message input to the model with a role indicating instruction following hierarchy. Instructions given with the developer or system role take precedence over instructions given with the user role. Messages with the assistant role are presumed to have been generated by the model in previous interactions."},{"type":"object","properties":{"type":{"type":"string","enum":["message"],"description":"The type of the message input. Always message."},"role":{"type":"string","enum":["user","system","developer"],"description":"The role of the message input."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of item."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["input_text"],"description":"The type of the input item. Always input_text."},"text":{"type":"string","description":"The text input to the model."}},"required":["type","text"],"description":"A text input to the model."},{"type":"object","properties":{"type":{"type":"string","enum":["input_image"],"description":"The type of the input item. Always input_image."},"detail":{"type":"string","enum":["high","low","auto"],"default":"auto","description":"The detail level of the image to be sent to the model. One of high, low, or auto."},"image_url":{"type":"string","nullable":true,"description":"The URL of the image to be sent to the model. A fully qualified URL or base64 encoded image in a data URL."}},"required":["type"]},{"type":"object","properties":{"type":{"type":"string","enum":["input_file"],"description":"The type of the input item. Always input_file."},"file_data":{"type":"string","description":"The content of the file to be sent to the model."},"filename":{"type":"string","description":"The name of the file to be sent to the model."}},"required":["type"]}]},"description":"A list of one or many input items to the model, containing different content types."}},"required":["role","content"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the output message."},"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. 
Always assistant."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the message input."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. 
Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["id","role","status","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. 
For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the function tool call."},"type":{"type":"string","enum":["function_call_output"],"description":"The type of the function tool call output. Always function_call_output."},"id":{"type":"string","nullable":true,"description":"The unique ID of the function tool call output. Populated when this item is returned via API."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"output":{"type":"string","description":"A JSON string of the output of the local shell tool call."},"type":{"type":"string","enum":["local_shell_call_output"],"description":"The type of the local shell tool call output. Always local_shell_call_output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"approval_request_id":{"type":"string","description":"The ID of the approval request being answered."},"approve":{"type":"boolean","description":"Whether the request was approved."},"type":{"type":"string","enum":["mcp_approval_response"],"description":"The type of the item. Always mcp_approval_response."},"id":{"type":"string","nullable":true,"description":"The unique ID of the approval response."},"reason":{"type":"string","nullable":true,"description":"Optional reason for the decision."}},"required":["approval_request_id","approve","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the item to reference."},"type":{"type":"string","nullable":true,"enum":["item_reference"],"description":"The type of item to reference. 
Always item_reference."}},"required":["id"],"description":"An internal identifier for an item to reference."}]},"description":"A list of one or many input items to the model, containing different content types."},{"nullable":true}],"description":"A system (or developer) message inserted into the model's context."},"max_output_tokens":{"type":"integer","nullable":true,"description":"An upper bound for the number of tokens that can be generated for a response, including visible output tokens and reasoning tokens."},"metadata":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Set of 16 key-value pairs that can be attached to an object. This can be useful for storing additional information about the object in a structured format, and querying for objects via API or the dashboard.\n\nKeys are strings with a maximum length of 64 characters. Values are strings with a maximum length of 512 characters."},"model":{"type":"string","description":"Model ID used to generate the response."},"object":{"type":"string","enum":["response"],"description":"The object type of this resource - always set to response."},"output":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the output message. Always assistant."},"type":{"type":"string","enum":["message"],"description":"The type of the output message. Always message."},"content":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url":{"type":"string","format":"uri","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"],"description":"A citation for a web resource used to generate a model response."},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_citation"]}},"required":["file_id","index","type"]},{"type":"object","properties":{"container_id":{"type":"string"},"start_index":{"type":"integer"},"end_index":{"type":"integer"},"file_id":{"type":"string"},"type":{"type":"string","enum":["container_file_citation"]}},"required":["container_id","start_index","end_index","file_id","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"integer"},"type":{"type":"string","enum":["file_path"]}},"required":["file_id","index","type"]}]},"description":"The annotations of the text output."},"text":{"type":"string","description":"The text output from the model."},"type":{"type":"string","enum":["output_text"],"description":"The type of the output text. 
Always output_text."},"logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"}},"logprob":{"type":"number"},"token":{"type":"string"}},"required":["bytes","logprob","token"]}}},"required":["bytes","logprob","token","top_logprobs"]}}},"required":["annotations","text","type"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal explanationfrom the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the refusal. Always refusal."}},"required":["refusal","type"]}],"description":"The content of the output message."}}},"required":["role","type","content"]},{"type":"object","properties":{"id":{"type":"string"},"queries":{"type":"array","items":{"type":"string"}},"status":{"type":"string","enum":["in_progress","searching","incomplete","failed","completed"]},"type":{"type":"string","enum":["file_search_call"]},"results":{"type":"array","nullable":true,"items":{"type":"object","properties":{"attributes":{"type":"object","nullable":true,"additionalProperties":{"anyOf":[{"type":"string"},{"type":"number"},{"type":"boolean"}]}},"file_id":{"type":"string"},"filename":{"type":"string"},"score":{"type":"number"},"text":{"type":"string"}}}}},"required":["id","queries","status","type"]},{"type":"object","properties":{"action":{"oneOf":[{"type":"object","properties":{"button":{"type":"string","enum":["left","right","wheel","back","forward"],"description":"Indicates which mouse button was pressed during the click."},"type":{"type":"string","enum":["click"],"description":"Specifies the event type. For a click action, this property is always set to click."},"x":{"type":"integer","description":"The x-coordinate where the click occurred."},"y":{"type":"integer","description":"The y-coordinate where the click occurred."}},"required":["button","type","x","y"],"description":"A click action."},{"type":"object","properties":{"type":{"type":"string","enum":["double_click"],"description":"Specifies the event type. For a double click action, this property is always set to double_click."},"x":{"type":"integer","description":"The x-coordinate where the double click occurred."},"y":{"type":"integer","description":"The y-coordinate where the double click occurred."}},"required":["type","x","y"],"description":"A double click action."},{"type":"object","properties":{"path":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer","description":"The y-coordinate."},"y":{"type":"integer","description":"The y-coordinate."}},"required":["x","y"]},"description":"An array of coordinates representing the path of the drag action. Coordinates will appear as an array of objects, eg"},"type":{"type":"string","enum":["drag"],"description":"Specifies the event type. For a drag action, this property is always set to drag."}},"required":["path","type"],"description":"A drag action."},{"type":"object","properties":{"keys":{"type":"array","items":{"type":"string"},"description":"The combination of keys the model is requesting to be pressed. This is an array of strings, each representing a key."},"type":{"type":"string","enum":["keypress"],"description":"Specifies the event type. 
For a keypress action, this property is always set to keypress."}},"required":["keys","type"],"description":"A collection of keypresses the model would like to perform."},{"type":"object","properties":{"type":{"type":"string","enum":["move"],"description":"Specifies the event type. For a move action, this property is always set to move."},"x":{"type":"integer","description":"The x-coordinate to move to."},"y":{"type":"integer","description":"The y-coordinate to move to."}},"required":["type","x","y"],"description":"A mouse move action."},{"type":"object","properties":{"type":{"type":"string","enum":["screenshot"],"description":"Specifies the event type. For a screenshot action, this property is always set to screenshot."}},"required":["type"],"description":"A screenshot action."},{"type":"object","properties":{"type":{"type":"string","enum":["scroll"],"description":"Specifies the event type. For a scroll action, this property is always set to scroll."},"scroll_x":{"type":"integer","description":"The horizontal scroll distance."},"scroll_y":{"type":"integer","description":"The vertical scroll distance."},"x":{"type":"integer","description":"The x-coordinate where the scroll occurred."},"y":{"type":"integer","description":"The y-coordinate where the scroll occurred."}},"required":["type","scroll_x","scroll_y","x","y"],"description":"A scroll action."},{"type":"object","properties":{"type":{"type":"string","enum":["type"],"description":"Specifies the event type. For a type action, this property is always set to type."},"text":{"type":"string","description":"The text to type."}},"required":["type","text"],"description":"An action to type in text."},{"type":"object","properties":{"type":{"type":"string","enum":["wait"],"description":"Specifies the event type. For a wait action, this property is always set to wait."}},"required":["type"],"description":"A wait action."}]},"call_id":{"type":"string","description":"An identifier used when responding to the tool call with output."},"id":{"type":"string","description":"The unique ID of the computer call."},"pending_safety_checks":{"type":"array","items":{"type":"object","properties":{"code":{"type":"string","description":"The type of the pending safety check."},"id":{"type":"string","description":"The ID of the pending safety check."},"message":{"type":"string","description":"Details about the pending safety check."}},"required":["code","id","message"]},"description":"The pending safety checks for the computer call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."},"type":{"type":"string","enum":["computer_call"],"description":"The type of the computer call. Always computer_call."}},"required":["action","call_id","id","pending_safety_checks","status","type"]},{"type":"object","properties":{"call_id":{"type":"string","description":"The ID of the computer tool call that produced the output."},"output":{"type":"object","properties":{"type":{"type":"string","enum":["computer_screenshot"],"description":"Specifies the event type. For a computer screenshot, this property is always set to computer_screenshot."},"image_url":{"type":"string","format":"uri","description":"The URL of the screenshot image."}},"required":["type"],"description":"A computer screenshot image used with the computer use tool."},"type":{"type":"string","enum":["computer_call_output"],"description":"The type of the computer tool call output. 
Always computer_call_output."},"acknowledged_safety_checks":{"type":"array","nullable":true,"items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the pending safety check."},"code":{"type":"string","nullable":true,"description":"The type of the pending safety check."},"message":{"type":"string","nullable":true,"description":"Details about the pending safety check."}},"required":["id"]},"description":"The safety checks reported by the API that have been acknowledged by the developer."},"id":{"type":"string","nullable":true,"description":"The ID of the computer tool call output."},"status":{"type":"string","nullable":true,"enum":["in_progress","completed","incomplete"],"description":"The status of the message input."}},"required":["call_id","output","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the web search tool call."},"status":{"type":"string","enum":["in_progress","completed","searching","failed"],"description":"The status of the web search tool call."},"type":{"type":"string","enum":["web_search_call"],"description":"The type of the web search tool call. Always web_search_call."}},"required":["id","status","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments to pass to the function."},"call_id":{"type":"string","description":"The unique ID of the function tool call generated by the model."},"name":{"type":"string","description":"The name of the function to run."},"type":{"type":"string","enum":["function_call"],"description":"The type of the function tool call. Always function_call."},"id":{"type":"string","description":"The unique ID of the function tool call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["arguments","call_id","name","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique identifier of the reasoning content."},"summary":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"A short summary of the reasoning used by the model when generating the response."},"type":{"type":"string","enum":["summary_text"],"description":"The type of the object. Always summary_text."}},"required":["text","type"]},"description":"Reasoning text contents."},"type":{"type":"string","enum":["reasoning"],"description":"The type of the object. 
Always reasoning."},"encrypted_content":{"type":"string","nullable":true,"description":"The encrypted content of the reasoning item - populated when a response is generated with reasoning.encrypted_content in the include parameter."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the item."}},"required":["id","summary","type"]},{"type":"object","properties":{"id":{"type":"string"},"result":{"type":"string","nullable":true},"status":{"type":"string","enum":["in_progress","completed","failed","generating"]},"type":{"type":"string","enum":["image_generation_call"]}},"required":["id","result","status","type"]},{"type":"object","properties":{"code":{"type":"string","nullable":true,"description":"The code to run, or null if not available."},"id":{"type":"string","description":"The unique ID of the code interpreter tool call."},"outputs":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"logs":{"type":"string","description":"The logs output from the code interpreter."},"type":{"type":"string","enum":["logs"],"description":"The type of the output. Always 'logs'."}},"required":["logs","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image"]},"url":{"type":"string"}},"required":["type","url"]}]},"description":"The outputs generated by the code interpreter, such as logs or images. Can be null if no outputs are available."},"status":{"type":"string","enum":["in_progress","completed","incomplete","interpreting","failed"],"description":"The status of the code interpreter tool call."},"type":{"type":"string","enum":["code_interpreter_call"],"description":"The type of the code interpreter tool call. Always code_interpreter_call."},"container_id":{"type":"string","description":"The ID of the container used to run the code."}},"required":["id","status","type","container_id"]},{"type":"object","properties":{"action":{"type":"object","properties":{"command":{"type":"array","items":{"type":"string"},"description":"The command to run."},"env":{"type":"object","additionalProperties":{"type":"string"},"description":"Environment variables to set for the command."},"type":{"type":"string","enum":["exec"],"description":"The type of the local shell action. Always exec."},"timeout_ms":{"type":"integer","nullable":true,"description":"Optional timeout in milliseconds for the command."},"user":{"type":"string","nullable":true,"description":"Optional user to run the command as."},"working_directory":{"type":"string","nullable":true,"description":"Optional working directory to run the command in."}},"required":["command","env","type"],"description":"Execute a shell command on the server."},"call_id":{"type":"string","description":"The unique ID of the local shell tool call generated by the model."},"id":{"type":"string","description":"The unique ID of the local shell call."},"status":{"type":"string","enum":["in_progress","completed","incomplete"],"description":"The status of the local shell call."},"type":{"type":"string","enum":["local_shell_call"],"description":"The type of the local shell call. 
Always local_shell_call."}},"required":["action","call_id","id","status","type"]},{"type":"object","properties":{"id":{"type":"string","description":"The unique ID of the list."},"server_label":{"type":"string","description":"The label of the MCP server."},"tools":{"type":"array","items":{"type":"object","properties":{"input_schema":{"nullable":true},"name":{"type":"string","description":"The name of the tool."},"annotations":{"nullable":true},"description":{"type":"string","nullable":true,"description":"The description of the tool."}},"required":["name"]}},"type":{"type":"string","enum":["mcp_list_tools"],"description":"The type of the item. Always mcp_list_tools."},"error":{"type":"string","nullable":true,"description":"Error message if the server could not list tools."}},"required":["id","server_label","tools","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of arguments for the tool."},"id":{"type":"string","description":"The unique ID of the approval request."},"name":{"type":"string"},"server_label":{"type":"string","description":"The name of the tool to run."},"type":{"type":"string","enum":["mcp_approval_request"],"description":"The type of the item. Always mcp_approval_request."}},"required":["arguments","id","name","server_label","type"]},{"type":"object","properties":{"arguments":{"type":"string","description":"A JSON string of the arguments passed to the tool."},"id":{"type":"string","description":"The unique ID of the tool call."},"name":{"type":"string","description":"The name of the tool that was run."},"server_label":{"type":"string","description":"The label of the MCP server running the tool."},"type":{"type":"string","enum":["mcp_call"],"description":"The type of the item. Always mcp_call."},"error":{"type":"string","nullable":true,"description":"The error from the tool call, if any."},"output":{"type":"string","nullable":true,"description":"The output from the tool call."}},"required":["arguments","id","name","server_label","type"]}]},"description":"An array of content items generated by the model.\n- The length and order of items in the output array is dependent on the model's response.\n- Rather than accessing the first item in the output array and assuming it's an assistant message with the content generated by the model, you might consider using the output_text property where supported in SDKs.\n"},"output_text":{"type":"string","nullable":true,"description":"SDK-only convenience property that contains the aggregated text output from all output_text items in the output array, if any are present. Supported in the Python and JavaScript SDKs."},"parallel_tool_calls":{"type":"boolean","description":"Whether to allow the model to run tool calls in parallel."},"previous_response_id":{"type":"string","nullable":true,"description":"The unique ID of the previous response to the model. Use this to create multi-turn conversations."},"prompt":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"The unique identifier of the prompt template to use."},"variables":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"Optional map of values to substitute in for variables in your prompt. 
The substitution values can either be strings, or other Response input types like images or files."},"version":{"type":"string","nullable":true,"description":"Optional version of the prompt template."}},"required":["id"],"description":"Reference to a prompt template and its variables."},"reasoning":{"type":"object","nullable":true,"properties":{"effort":{"type":"string","nullable":true,"enum":["low","medium","high"],"description":"Constrains effort on reasoning for reasoning models. Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response."},"summary":{"type":"string","nullable":true,"enum":["auto","concise","detailed"],"description":"A summary of the reasoning performed by the model. This can be useful for debugging and understanding the model's reasoning process."}},"description":"Configuration options for reasoning models."},"service_tier":{"type":"string","nullable":true,"description":"Specifies the processing type used for serving the request."},"status":{"type":"string","enum":["completed","failed","in_progress","cancelled","queued","incomplete"],"description":"The status of the response generation."},"temperature":{"type":"number","nullable":true,"minimum":0,"maximum":2,"description":"What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"text":{"type":"object","nullable":true,"properties":{"format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"type":{"type":"string","enum":["json_schema"]},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name","schema","type"],"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"Controls which (if any) tool is called by the model.\n\nnone means the model will not call any tool and instead generates a message.\n\nauto means the model can pick between generating a message or calling one or more tools.\n\nrequired means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11","computer_use_preview","code_interpreter","mcp","file_search","image_generation"]}},"required":["type"],"description":"Indicates that the model should use a built-in tool to generate a response."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"type":{"type":"string","enum":["function"],"description":"For function calling, the type is always function."}},"required":["name","type"],"description":"Use this option to force the model to call a specific function."},{"nullable":true}],"description":"How the model should select which tool (or tools) to use when generating a response."},"tools":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["web_search_preview","web_search_preview_2025_03_11"],"description":"The type of the web search tool. One of web_search_preview or web_search_preview_2025_03_11."},"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."},"city":{"type":"string","nullable":true,"description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","nullable":true,"description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","nullable":true,"description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","nullable":true,"description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"required":["type"],"description":"The user's location"}},"required":["type"],"description":"This tool searches the web for relevant results to use in a response."},{"type":"object","properties":{"display_height":{"type":"integer","description":"The height of the computer display."},"display_width":{"type":"integer","description":"The width of the computer display."},"environment":{"type":"string","enum":["windows","mac","linux","ubuntu","browser"],"description":"The type of computer environment to control."},"type":{"type":"string","enum":["computer_use_preview"],"description":"The type of the computer use tool. Always computer_use_preview."}},"required":["display_height","display_width","environment","type"],"description":"A tool that controls a virtual computer."},{"type":"object","properties":{"server_label":{"type":"string","description":"A label for this MCP server, used to identify it in tool calls."},"server_url":{"type":"string","description":"The URL for the MCP server."},"type":{"type":"string","enum":["mcp"],"description":"The type of the MCP tool. 
Always mcp."},"allowed_tools":{"anyOf":[{"type":"array","items":{"type":"string"},"description":"A string array of allowed tool names."},{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of allowed tool names."}},"description":"A filter object to specify which tools are allowed."},{"nullable":true}],"description":"List of allowed tool names or a filter object."},"headers":{"type":"object","nullable":true,"additionalProperties":{"type":"string"},"description":"Optional HTTP headers to send to the MCP server. Use for authentication or other purposes."},"require_approval":{"anyOf":[{"type":"string","enum":["always","never"]},{"type":"object","properties":{"always":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that require approval."}},"description":"A list of tools that always require approval."},"never":{"type":"object","properties":{"tool_names":{"type":"array","items":{"type":"string"},"description":"List of tools that do not require approval."}},"description":"A list of tools that never require approval."}}},{"nullable":true}],"description":"Specify which of the MCP server's tools require approval."}},"required":["server_label","server_url","type"],"description":"Give the model access to additional tools via remote Model Context Protocol (MCP) servers."},{"type":"object","properties":{"type":{"type":"string","enum":["code_interpreter"],"description":"The type of the code interpreter tool. Always code_interpreter."},"container":{"anyOf":[{"type":"string"},{"type":"object","properties":{"type":{"type":"string","enum":["auto"]}},"required":["type"]}],"description":"The container ID."}},"required":["type","container"],"description":"A tool that runs Python code to help generate a response to a prompt."},{"type":"object","properties":{"type":{"type":"string","enum":["local_shell"],"description":"The type of the local shell tool. Always local_shell."}},"required":["type"],"description":"A tool that allows the model to execute shell commands in a local environment."},{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"parameters":{"type":"object","nullable":true,"additionalProperties":{"nullable":true},"description":"A JSON schema object describing the parameters of the function."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enforce strict parameter validation."},"type":{"type":"string","enum":["function"],"description":"The type of the function tool. Always function."},"description":{"type":"string","nullable":true,"description":"A description of the function. 
Used by the model to determine whether or not to call the function."}},"required":["name","type"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_generation"]},"background":{"type":"string","enum":["transparent","opaque","auto"]},"input_image_mask":{"type":"object","properties":{"file_id":{"type":"string"},"image_url":{"type":"string"}}},"model":{"type":"string","enum":["gpt-image-1"]},"moderation":{"type":"string","enum":["auto","low"]},"output_compression":{"type":"number"},"output_format":{"type":"string","enum":["png","webp","jpeg"]},"partial_images":{"type":"integer","minimum":0,"maximum":3},"quality":{"type":"string","enum":["low","medium","high","auto"]},"size":{"type":"string","enum":["1024x1024","1024x1536","1536x1024","auto"]}},"required":["type"]}]},"description":"An array of tools the model may call while generating a response. You can specify which tool to use by setting the tool_choice parameter."},"top_p":{"type":"number","nullable":true,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n\nWe generally recommend altering this or temperature but not both."},"truncation":{"type":"string","nullable":true,"enum":["auto","disabled"],"description":"The truncation strategy to use for the model response.\n- auto: If the context of this response and previous ones exceeds the model's context window size, the model will truncate the response to fit the context window by dropping input items in the middle of the conversation.\n- disabled (default): If a model response will exceed the context window size for a model, the request will fail with a 400 error.\n"},"usage":{"type":"object","properties":{"input_tokens":{"type":"integer","description":"The number of input tokens."},"input_tokens_details":{"type":"object","nullable":true,"properties":{"cached_tokens":{"type":"integer","description":"The number of tokens that were retrieved from the cache."}},"required":["cached_tokens"],"description":"A detailed breakdown of the input tokens."},"output_tokens":{"type":"integer","description":"The number of output tokens."},"output_tokens_details":{"type":"object","nullable":true,"properties":{"reasoning_tokens":{"type":"integer","description":"The number of reasoning tokens."}},"required":["reasoning_tokens"],"description":"A detailed breakdown of the output tokens."},"total_tokens":{"type":"integer","description":"The total number of tokens used."}},"required":["input_tokens","output_tokens","total_tokens"],"description":"Represents token usage details including input tokens, output tokens, a breakdown of output tokens, and the total tokens used."}},"required":["created_at","id","model","object","parallel_tool_calls"],"description":"The response that was created."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.created"],"description":"The type of the event."}},"required":["response","sequence_number","type"]},{"type":"object","properties":{"content_index":{"type":"number","description":"The index of the content part that was added."},"item_id":{"type":"string","description":"The ID of the output item that the content part was added to."},"output_index":{"type":"number","description":"The index of the output item that the content part was added 
to."},"part":{"anyOf":[{"type":"object","properties":{"annotations":{"type":"array","items":{"anyOf":[{"type":"object","properties":{"file_id":{"type":"string"},"filename":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_citation"],"description":"The type of the event."}},"required":["file_id","filename","index","type"]},{"type":"object","properties":{"end_index":{"type":"number","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"number","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"type":{"type":"string","enum":["url_citation"],"description":"The type of the event."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","type","url"]},{"type":"object","properties":{"container_id":{"type":"string"},"end_index":{"type":"number"},"file_id":{"type":"string"},"filename":{"type":"string"},"start_index":{"type":"number"},"type":{"type":"string","enum":["container_file_citation"],"description":"The type of the event."}},"required":["container_id","end_index","file_id","filename","start_index","type"]},{"type":"object","properties":{"file_id":{"type":"string"},"index":{"type":"number"},"type":{"type":"string","enum":["file_path"],"description":"The type of the event."}},"required":["file_id","index","type"]}]}},"text":{"type":"string"},"type":{"type":"string","enum":["output_text"],"description":"The type of the event."},"logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"},"top_logprobs":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string"},"bytes":{"type":"array","items":{"type":"number"}},"logprob":{"type":"number"}},"required":["token","bytes","logprob"]}}},"required":["token","bytes","logprob","top_logprobs"]}}},"required":["annotations","text","type","logprobs"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal text that is finalized."},"type":{"type":"string","enum":["refusal"],"description":"The type of the event."}},"required":["refusal","type"]},{"type":"object","properties":{"text":{"type":"string","description":"Configuration options for a text response from the model. 
Can be plain text or structured JSON data."},"type":{"type":"string","enum":["reasoning_text"],"description":"The type of the event."}},"required":["text","type"]}],"description":"The content part that was added."},"sequence_number":{"type":"number","description":"The sequence number of this event."},"type":{"type":"string","enum":["response.content_part.added"],"description":"The type of the event."}},"required":["content_index","item_id","output_index","part","sequence_number","type"]}]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"openai/o4-mini-2025-04-16", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'openai/o4-mini-2025-04-16', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {"id": "chatcmpl-BP2srBuXMLGJxtqni8qM5tyAU4RWL", "object": "chat.completion", "choices": [{"index": 0, "finish_reason": "stop", "message": {"role": "assistant", "content": "Hello! How can I assist you today?", "refusal": null, "annotations": []}}], "created": 1745308985, "model": "o4-mini-2025-04-16", "usage": {"prompt_tokens": 16, "completion_tokens": 259, "total_tokens": 275, "prompt_tokens_details": {"cached_tokens": 0, "audio_tokens": 0}, "completion_tokens_details": {"reasoning_tokens": 0, "audio_tokens": 0, "accepted_prediction_tokens": 0, "rejected_prediction_tokens": 0}}, "system_fingerprint": null} ``` {% endcode %}
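If you only need the assistant's reply rather than the whole response object, it sits inside the `choices` array shown above. Below is a minimal sketch of extracting it, assuming `data` is the dictionary returned by `response.json()` in the Python example:

{% code overflow="wrap" %}
```python
# Pull the assistant's reply out of a /v1/chat/completions response.
# Assumes `data` is the parsed JSON from the Python example above.
reply = data["choices"][0]["message"]["content"]
print(reply)  # "Hello! How can I assist you today?"

# Token accounting for the request is available under "usage".
print(data["usage"]["total_tokens"])
```
{% endcode %}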
## Code Example #2: Using /responses Endpoint {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/responses", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"openai/o4-mini-2025-04-16", "input":"Hello" # Insert your question for the model here, instead of Hello } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { try { const response = await fetch('https://api.aimlapi.com/v1/responses', { method: 'POST', headers: { // Insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'openai/o4-mini-2025-04-16', input: 'Hello', // Insert your question here, instead of Hello }), }); if (!response.ok) { throw new Error(`HTTP error! Status ${response.status}`); } const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } catch (error) { console.error('Error', error); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "resp_686ba45ce63481a2a4b1fad55d2bea8102a1cc22f1a1bcf1", "object": "response", "created_at": 1751884892, "error": null, "incomplete_details": null, "instructions": null, "max_output_tokens": 512, "model": "openai/o4-mini-2025-04-16", "output": [ { "id": "rs_686ba463d18481a29dde85cfd7b055bf02a1cc22f1a1bcf1", "type": "reasoning", "summary": [] }, { "id": "msg_686ba463d4e081a2b2e2aff962ab00f702a1cc22f1a1bcf1", "type": "message", "status": "in_progress", "content": [ { "type": "output_text", "annotations": [], "logprobs": [], "text": "Hello! How can I help you today?" } ], "role": "assistant" } ], "parallel_tool_calls": true, "previous_response_id": null, "reasoning": { "effort": "medium", "summary": null }, "temperature": 1, "text": { "format": { "type": "text" } }, "tool_choice": "auto", "tools": [], "top_p": 1, "truncation": "disabled", "usage": { "input_tokens": 294, "input_tokens_details": { "cached_tokens": 0 }, "output_tokens": 2520, "output_tokens_details": { "reasoning_tokens": 0 }, "total_tokens": 2814 }, "metadata": {}, "output_text": "Hello! How can I help you today?" } ``` {% endcode %}
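The `/responses` payload nests the generated text more deeply than the Chat Completions format: the text lives in `output` items of type `message`, inside content parts of type `output_text` (in the example above, the top-level `output_text` field carries the same text). Below is a minimal sketch of collecting it, assuming `data` is the dictionary returned by `response.json()` in the Python example:

{% code overflow="wrap" %}
```python
# Gather all output_text fragments from a /v1/responses result.
# Assumes `data` is the parsed JSON from the Python example above.
texts = [
    part["text"]
    for item in data.get("output", [])
    if item.get("type") == "message"
    for part in item.get("content", [])
    if part.get("type") == "output_text"
]
print("".join(texts))  # "Hello! How can I help you today?"
```
{% endcode %}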
--- # Source: https://docs.aimlapi.com/api-references/vision-models/ocr-optical-character-recognition.md # OCR: Optical Character Recognition Optical Character Recognition (OCR) technology enables the extraction of text from images, scanned documents, and PDFs, transforming them into machine-readable formats. Modern OCR systems go beyond simple text recognition, interpreting complex layouts, tables, and even handwritten notes with high accuracy. This technology is widely used in document automation, digital archiving, and accessibility tools, making information more searchable and editable. Advanced OCR models leverage deep learning to enhance recognition capabilities, accurately distinguishing between different fonts, languages, and structures. Some solutions also integrate natural language processing (NLP) to improve contextual understanding, ensuring better accuracy in document digitization. As OCR technology evolves, it continues to bridge the gap between physical and digital content, streamlining workflows across industries. We provide APIs from two providers: **Google** and **Mistral AI**. Test both options by making several trial requests and determine which one best suits your needs. {% content-ref url="../music-models/google" %} [google](https://docs.aimlapi.com/api-references/music-models/google) {% endcontent-ref %} {% content-ref url="ocr-optical-character-recognition/mistral-ai" %} [mistral-ai](https://docs.aimlapi.com/api-references/vision-models/ocr-optical-character-recognition/mistral-ai) {% endcontent-ref %} --- # Source: https://docs.aimlapi.com/api-references/speech-models/text-to-speech/hume-ai/octave-2.md # octave-2 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `hume/octave-2` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} An advanced text-to-speech model with improved emotional understanding, support for 11 languages, and sub-200 ms audio generation. It provides more reliable pronunciation of complex and uncommon inputs. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). 
## API Schema ## POST /v1/tts > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}}},"paths":{"/v1/tts":{"post":{"operationId":"VoiceModelsController_textToSpeech_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["hume/octave-2"]},"text":{"type":"string","minLength":1,"maxLength":500000,"description":"The text content to be converted to speech."},"voice":{"type":"string","enum":["Vince Douglas","Male English Actor","Ava Song","Campfire Narrator","TikTok Fashion Influencer","Colton Rivers","Literature Professor","Booming American Narrator","Imani Carter","Terrence Bentley","Nature Documentary Narrator","Alice Bennett","Sitcom Girl","Unserious Movie Trailer Narrator","Articulate ASMR British Narrator","Big Dicky","English Children's Book Narrator","Sebastian Lockwood","Donovan Sinclair","Booming British Narrator","Relaxing ASMR Woman","Lady Elizabeth","Male Protagonist","Tough Guy","French Chef","Spanish Instructor","Charming Cowgirl"],"default":"Vince Douglas","description":"Name of the voice to be used."},"format":{"type":"string","enum":["wav","mp3"],"description":"Audio output format. MP3 provides good compression and compatibility, PCM offers uncompressed high quality, and FLAC provides lossless compression."}},"required":["model","text"]}}}},"responses":{"201":{"description":"","content":{"application/json":{"schema":{}}}}},"tags":["Voice Models"]}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # Insert your AI/ML API key instead of : api_key = "" base_url = "https://api.aimlapi.com/v1" headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json", } data = { "model": "hume/octave-2", "text": "It is a fast and powerful language model. Use it to convert text to natural sounding spoken text.", "voice": "Relaxing ASMR Woman", } response = requests.post(f"{base_url}/tts", headers=headers, json=data) response.raise_for_status() result = response.json() print(json.dumps(result, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript import axios from "axios"; // Insert your AI/ML API key instead of : const apiKey = ""; const baseURL = "https://api.aimlapi.com/v1"; const headers = { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json", }; const data = { model: "hume/octave-2", text: "It is a fast and powerful language model. Use it to convert text to natural sounding spoken text.", voice: "Relaxing ASMR Woman", }; const main = async () => { const response = await axios.post(`${baseURL}/tts`, data, { headers }); console.log(response.data); }; main().catch(console.error); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ``` { "audio": { "url": "https://cdn.aimlapi.com/generations/hippopotamus/1769604037348-b2b0235e-e813-462d-904e-632803a698b4.wav" }, "meta": { "usage": { "credits_used": 12222 } } } ``` {% endcode %}
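The response contains a link to the generated audio rather than the audio itself. Below is a minimal sketch of saving the file locally, assuming `result` is the dictionary returned by `response.json()` in the Python example above (the local file name is an arbitrary choice):

{% code overflow="wrap" %}
```python
import requests

# Assumes `result` is the parsed JSON from the Python example above.
audio_url = result["audio"]["url"]

audio_response = requests.get(audio_url, stream=True)
audio_response.raise_for_status()

# "speech.wav" is just an illustrative local file name.
with open("speech.wav", "wb") as f:
    for chunk in audio_response.iter_content(chunk_size=8192):
        f.write(chunk)
```
{% endcode %}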
Listen to the audio sample we generated (\~ 1.8 s): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/vision-models/ofr-optical-feature-recognition.md # OFR: Optical Feature Recognition Our API provides a feature to extract visual features from images. ## Identify visual features in images. > Performs optical feature recognition (OFR) to identify visual features such as objects, landmarks, or logos from images, aiding in image analysis and categorization. ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Vision.v1.VisionRequestDTO":{"type":"object","properties":{"image":{"type":"object","properties":{"source":{"type":"object","properties":{"imageUri":{"type":"string","description":"The URI of the source image."}},"required":["imageUri"],"additionalProperties":false}},"required":["source"],"additionalProperties":false,"description":"The image to be processed."},"features":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["FACE_DETECTION","LANDMARK_DETECTION","LOGO_DETECTION","LABEL_DETECTION","TEXT_DETECTION","DOCUMENT_TEXT_DETECTION","SAFE_SEARCH_DETECTION","IMAGE_PROPERTIES","CROP_HINTS","WEB_DETECTION","PRODUCT_SEARCH","OBJECT_LOCALIZATION"],"description":"The feature type."},"maxResults":{"type":"number","description":"Maximum number of results of this type."},"model":{"type":"string","enum":["builtin/stable","builtin/latest"],"description":"Model to use for the feature."}},"required":["type"],"additionalProperties":false},"description":"Requested features."},"imageContext":{"type":"object","properties":{"latLongRect":{"type":"object","properties":{"minLatLng":{"type":"object","properties":{"latitude":{"type":"number","description":"The latitude in degrees. It must be in the range [-90.0, +90.0]."},"longitude":{"type":"number","description":"The longitude in degrees. It must be in the range [-180.0, +180.0]."}},"required":["latitude","longitude"],"additionalProperties":false,"description":"Min latitude-longitude pair."},"maxLatLng":{"type":"object","properties":{"latitude":{"type":"number","description":"The latitude in degrees. It must be in the range [-90.0, +90.0]."},"longitude":{"type":"number","description":"The longitude in degrees. It must be in the range [-180.0, +180.0]."}},"required":["latitude","longitude"],"additionalProperties":false,"description":"Max latitude-longitude pair."}},"required":["minLatLng","maxLatLng"],"additionalProperties":false,"description":"Rectangle determined by min and max LatLng (latitude-longitude) pairs."},"languageHints":{"type":"array","items":{"type":"string"},"description":"List of languages to use for TEXT_DETECTION. In most cases, an empty value yields the best results since it enables automatic language detection. For languages based on the Latin alphabet, setting languageHints is not needed. In rare cases, when the language of the text in the image is known, setting a hint will help get better results (although it will be a significant hindrance if the hint is wrong)."},"cropHintsParams":{"type":"object","properties":{"aspectRatios":{"type":"array","items":{"type":"number"},"description":"Aspect ratios in floats, representing the ratio of the width to the height of the image. 
For example, if the desired aspect ratio is 4/3, the corresponding float value should be 1.33333. If not specified, the best possible crop is returned. The number of provided aspect ratios is limited to a maximum of 16; any aspect ratios provided after the 16th are ignored."}},"required":["aspectRatios"],"additionalProperties":false,"description":"Parameters for crop hints annotation request."},"faceRecognitionParams":{"type":"object","properties":{"celebritySet":{"type":"array","items":{"type":"string"}}},"required":["celebritySet"],"additionalProperties":false,"description":"Parameters for face recognition"},"textDetectionParams":{"type":"object","properties":{"enableTextDetectionConfidenceScore":{"type":"boolean","description":"By default, Cloud Vision API only includes confidence score for DOCUMENT_TEXT_DETECTION result. Set the flag to true to include confidence score for TEXT_DETECTION as well."}},"required":["enableTextDetectionConfidenceScore"],"additionalProperties":false,"description":"Parameters for text detection and document text detection."}},"additionalProperties":false,"description":"Additional context that may accompany the image."}},"required":["image","features"]},"Vision.v1.OCRResponseDTO":{"type":"object","properties":{"pages":{"type":"array","items":{"type":"object","properties":{"index":{"type":"integer","description":"The page index in a PDF document starting from 0"},"markdown":{"type":"string","description":"The markdown string response of the page"},"images":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"Image ID for extracted image in a page"},"top_left_x":{"type":"integer","nullable":true,"description":"X coordinate of top-left corner of the extracted image"},"top_left_y":{"type":"integer","nullable":true,"description":"Y coordinate of top-left corner of the extracted image"},"bottom_right_x":{"type":"integer","nullable":true,"description":"X coordinate of bottom-right corner of the extracted image"},"bottom_right_y":{"type":"integer","nullable":true,"description":"Y coordinate of bottom-right corner of the extracted image"},"image_base64":{"type":"string","nullable":true,"format":"uri","description":"Base64 string of the extracted image"}},"required":["id","top_left_x","top_left_y","bottom_right_x","bottom_right_y"]},"description":"List of all extracted images in the page"},"dimensions":{"type":"object","nullable":true,"properties":{"dpi":{"type":"integer","description":"Dots per inch of the page-image."},"height":{"type":"integer","description":"Height of the image in pixels."},"width":{"type":"integer","description":"Width of the image in pixels."}},"required":["dpi","height","width"],"description":"The dimensions of the PDF page's screenshot image"}},"required":["index","markdown","images","dimensions"]},"description":"List of OCR info for pages"},"model":{"type":"string","enum":["mistral-ocr-latest"],"description":"The model used to generate the OCR."},"usage_info":{"type":"object","properties":{"pages_processed":{"type":"integer","description":"Number of pages processed"},"doc_size_bytes":{"type":"integer","nullable":true,"description":"Document size in bytes"}},"required":["pages_processed","doc_size_bytes"],"description":"Usage info for the OCR request."}},"required":["pages","model","usage_info"]}}},"paths":{"/vision":{"post":{"operationId":"DocumentModelsController_processVisionRequest","summary":"Identify visual features in images.","description":"Performs optical feature recognition (OFR) to identify 
visual features such as objects, landmarks, or logos from images, aiding in image analysis and categorization.","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"$ref":"#/components/schemas/Vision.v1.VisionRequestDTO"}}}},"responses":{"201":{"description":"Successfully processed document with vision model","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Vision.v1.OCRResponseDTO"}}}}},"tags":["Vision Models"]}}}} ``` --- # Source: https://docs.aimlapi.com/api-references/video-models/bytedance/omnihuman-1.5.md # OmniHuman 1.5 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `bytedance/omnihuman/v1.5` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} From a single image and a voice track, this model produces expressive character animations aligned with the speech’s rhythm, intonation, and meaning. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas ### Create a video generation task and send it to the server You can create a video with this API by providing a reference image of a character and an audio file. The character will deliver the audio with full lip-sync and natural gestures. This POST request creates and submits a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["bytedance/omnihuman/v1.5"]},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame for the video."},"audio_url":{"type":"string","format":"uri","description":"The URL of the audio file for lip-sync animation. The model detects spoken parts and syncs the character's mouth to them. Audio must be under 30s long."}},"required":["model","image_url","audio_url"],"title":"bytedance/omnihuman/v1.5"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `generation_id`, obtained from the endpoint described above. If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # replace with your actual AI/ML API key api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "bytedance/omnihuman/v1.5", "image_url": "https://cdn.aimlapi.com/assets/content/office_man.png", "audio_url": "https://storage.googleapis.com/falserverless/example_inputs/omnihuman_audio.mp3", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... 
Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "bytedance/omnihuman/v1.5", image_url: "https://cdn.aimlapi.com/assets/content/office_man.png", audio_url: "https://storage.googleapis.com/falserverless/example_inputs/omnihuman_audio.mp3", }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("Failed to start generation"); return; } const genId = genResponse.id; console.log("Gen_ID:", genId); const startTime = Date.now(); const timeout = 600000; const checkStatus = () => { if (Date.now() - startTime > timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, 10000); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': '3fdd5350-31b6-47e0-bb85-80ee2d970f73:bytedance/omnihuman/v1.5', 'status': 'queued', 'meta': {'usage': {'tokens_used': 5040000}}} Generation ID: 3fdd5350-31b6-47e0-bb85-80ee2d970f73:bytedance/omnihuman/v1.5 Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': '3fdd5350-31b6-47e0-bb85-80ee2d970f73:bytedance/omnihuman/v1.5', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/flamingo/files/b/kangaroo/HSRdq0Z-fMRhwDLV8SM5y_video.mp4'}} ``` {% endcode %}
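When the task reaches the `completed` status, the response carries only a URL to the finished video. Below is a minimal sketch of downloading it, assuming `response_data` is the final dictionary returned by `main()` in the Python example above (the local file name is an arbitrary choice):

{% code overflow="wrap" %}
```python
import requests

# Assumes `response_data` is the completed-task dict returned by main() above.
video_url = response_data["video"]["url"]

video_response = requests.get(video_url, stream=True)
video_response.raise_for_status()

# "omnihuman_result.mp4" is just an illustrative local file name.
with open("omnihuman_result.mp4", "wb") as f:
    for chunk in video_response.iter_content(chunk_size=8192):
        f.write(chunk)
```
{% endcode %}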
**Original (1920x1088, with sound)**: {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/bytedance/omnihuman.md # OmniHuman {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `bytedance/omnihuman` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} An advanced AI framework from ByteDance that generates realistic lip-sync videos from a single image and motion signals (audio). It supports multiple visual and audio styles and produces videos in any body proportion, with realism enhanced by motion, lighting, and texture details. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas ### Create a video generation task and send it to the server You can create a video with this API by providing a reference image of a character and an audio file. The character will deliver the audio with full lip-sync and natural gestures. This POST request creates and submits a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["bytedance/omnihuman"]},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame for the video."},"audio_url":{"type":"string","format":"uri","description":"The URL of the audio file for lip-sync animation. The model detects spoken parts and syncs the character's mouth to them. Audio must be under 30s long."}},"required":["model","image_url","audio_url"],"title":"bytedance/omnihuman"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `generation_id`, obtained from the endpoint described above. If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # replace with your actual AI/ML API key api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "bytedance/omnihuman", "image_url": "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", "audio_url": "https://storage.googleapis.com/falserverless/example_inputs/omnihuman_audio.mp3", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... 
Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "bytedance/omnihuman", image_url: "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", audio_url: "https://storage.googleapis.com/falserverless/example_inputs/omnihuman_audio.mp3", }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("Failed to start generation"); return; } const genId = genResponse.id; console.log("Gen_ID:", genId); const startTime = Date.now(); const timeout = 600000; const checkStatus = () => { if (Date.now() - startTime > timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, 10000); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': '9e730e80-40e2-4461-ba1b-1cc15df10b4f:bytedance/omnihuman', 'status': 'queued', 'meta': {'usage': {'tokens_used': 5880000}}} Generation ID: 9e730e80-40e2-4461-ba1b-1cc15df10b4f:bytedance/omnihuman Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': '9e730e80-40e2-4461-ba1b-1cc15df10b4f:bytedance/omnihuman', 'status': 'completed', 'video': {'url': 'https://v3b.fal.media/files/b/tiger/3q9C4sDWWOX63lEz42Ohb_video.mp4'}} ``` {% endcode %}
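A task does not always finish with `completed`: per the GET schema above, the `status` field can also be `error`, in which case the response includes an `error` object with `name` and `message` instead of a `video` URL. Below is a minimal sketch of handling both outcomes, assuming `response_data` is the final dictionary returned by the polling loop above:

{% code overflow="wrap" %}
```python
# Assumes `response_data` is the final dict returned by the polling loop above.
status = response_data.get("status")

if status == "completed":
    print("Video ready:", response_data["video"]["url"])
elif status == "error":
    err = response_data.get("error") or {}
    print(f"Generation failed: {err.get('name')}: {err.get('message')}")
else:
    print("Unexpected status:", status)
```
{% endcode %}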
**Original (with sound)**: [896x1344](https://drive.google.com/file/d/1WOkEu1iB3kl8UeCcEjFj1Qn29i_LBj5Y/view?usp=sharing) **Low-res GIF preview**:
--- # Source: https://docs.aimlapi.com/faq/openai-sdk-doesnt-work.md # OpenAI SDK doesn't work? ## Check OpenAI SDK installation Depending on your environment, the steps may differ. For Python and NodeJS, you can proceed to the setup article and check if all steps are completed correctly. ## Check base URL OpenAI SDK is a configurable package. To use it with our AI/ML API server, you need to pass a parameter called `base URL`. Depending on your environment, this process can differ, but here is an example for Python and NodeJS: {% tabs %} {% tab title="Python" %} ```python from openai import OpenAI api_key = "" base_url = "https://api.aimlapi.com/v1" api = OpenAI(api_key=api_key, base_url=base_url) ``` {% endtab %} {% tab title="JavaScript" %} ```javascript const { OpenAI } = require("openai"); const apiKey = "YOUR_AIMLAPI_KEY"; const baseURL = "https://api.aimlapi.com/v1"; const api = new OpenAI({ apiKey, baseURL, }); ``` {% endtab %} {% endtabs %} ## Check API key When you use the AI/ML API, you should use our API key. This API key is listed on your account page, and you should keep it safe. When sending a request to the API, you need to ensure that you have included the API key in your query. Look at the example with the base URL above and check if you passed the correct API key to the `api_key` or `apiKey` parameters. --- # Source: https://docs.aimlapi.com/api-references/embedding-models/openai.md # Source: https://docs.aimlapi.com/api-references/speech-models/text-to-speech/openai.md # Source: https://docs.aimlapi.com/api-references/speech-models/speech-to-text/openai.md # Source: https://docs.aimlapi.com/api-references/video-models/openai.md # Source: https://docs.aimlapi.com/api-references/image-models/openai.md # Source: https://docs.aimlapi.com/api-references/text-models-llm/openai.md # OpenAI - [gpt-3.5-turbo](/api-references/text-models-llm/openai/gpt-3.5-turbo.md) - [gpt-4](/api-references/text-models-llm/openai/gpt-4.md) - [gpt-4-preview](/api-references/text-models-llm/openai/gpt-4-preview.md) - [gpt-4-turbo](/api-references/text-models-llm/openai/gpt-4-turbo.md) - [gpt-4o](/api-references/text-models-llm/openai/gpt-4o.md) - [gpt-4o-mini](/api-references/text-models-llm/openai/gpt-4o-mini.md) - [gpt-4o-audio-preview](/api-references/text-models-llm/openai/gpt-4o-audio-preview.md) - [gpt-4o-mini-audio-preview](/api-references/text-models-llm/openai/gpt-4o-mini-audio-preview.md) - [gpt-4o-search-preview](/api-references/text-models-llm/openai/gpt-4o-search-preview.md) - [gpt-4o-mini-search-preview](/api-references/text-models-llm/openai/gpt-4o-mini-search-preview.md) - [o1](/api-references/text-models-llm/openai/o1.md) - [o3](/api-references/text-models-llm/openai/o3.md) - [o3-mini](/api-references/text-models-llm/openai/o3-mini.md) - [o3-pro](/api-references/text-models-llm/openai/o3-pro.md) - [gpt-4.1](/api-references/text-models-llm/openai/gpt-4.1.md) - [gpt-4.1-mini](/api-references/text-models-llm/openai/gpt-4.1-mini.md) - [gpt-4.1-nano](/api-references/text-models-llm/openai/gpt-4.1-nano.md) - [o4-mini](/api-references/text-models-llm/openai/o4-mini.md) - [gpt-oss-20b](/api-references/text-models-llm/openai/gpt-oss-20b.md) - [gpt-oss-120b](/api-references/text-models-llm/openai/gpt-oss-120b.md) - [gpt-5](/api-references/text-models-llm/openai/gpt-5.md) - [gpt-5-mini](/api-references/text-models-llm/openai/gpt-5-mini.md) - [gpt-5-nano](/api-references/text-models-llm/openai/gpt-5-nano.md) - [gpt-5-chat](/api-references/text-models-llm/openai/gpt-5-chat.md) - 
[gpt-5-pro](/api-references/text-models-llm/openai/gpt-5-pro.md) - [gpt-5.1](/api-references/text-models-llm/openai/gpt-5-1.md) - [gpt-5.1-chat-latest](/api-references/text-models-llm/openai/gpt-5-1-chat-latest.md) - [gpt-5.1-codex](/api-references/text-models-llm/openai/gpt-5-1-codex.md) - [gpt-5.1-codex-mini](/api-references/text-models-llm/openai/gpt-5-1-codex-mini.md) - [gpt-5.2](/api-references/text-models-llm/openai/gpt-5.2.md) - [gpt-5.2-chat-latest](/api-references/text-models-llm/openai/gpt-5.2-chat-latest.md) - [gpt-5.2-pro](/api-references/text-models-llm/openai/gpt-5.2-pro.md) - [gpt-5.2-codex](/api-references/text-models-llm/openai/gpt-5.2-codex.md) --- # Source: https://docs.aimlapi.com/integrations/our-integration-list.md # Our Integration List Our API endpoint can be integrated with popular AI workflow platforms and tools, allowing their users to access our models through these environments.
| Service | Description |
| --- | --- |
| Agno | A lightweight library for building agents — AI programs that operate autonomously, use tools, and have memory, knowledge, storage, and reasoning capabilities. |
| Aider | A command-line pair programming tool that connects to OpenAI-compatible APIs. It lets you chat with models to edit your codebase, auto-commit changes, and build software collaboratively from the terminal. |
| AutoGPT | An open-source platform designed to help you build, test, and run AI agents using a no-code visual interface. It allows users to link LLMs with tools, memory, planning modules, and action chains. |
| Cline | An open-source AI coding assistant with two working modes (Plan/Act), terminal command execution, and support for the Model Context Protocol (MCP) in VS Code. |
| continue.dev | An open-source IDE extension and hub for rules, tools, and models that let you create, share, and use custom AI code assistants. |
| Cursor | An advanced AI-powered IDE that combines intelligent code completion, inline explanations, and automatic code editing directly inside the editor. |
| ElizaOS | A powerful multi-agent simulation framework designed to create, deploy, and manage autonomous AI agents. Built with TypeScript, it provides a flexible and extensible platform for developing intelligent agents that can interact across multiple platforms while maintaining consistent personalities and knowledge. |
| GPT Researcher | An autonomous agent that takes care of the tedious task of research for you by scraping, filtering, and aggregating 20+ web sources per research task. |
| Kilo Code | An open-source AI coding assistant and VS Code extension that enables natural-language code generation, debugging, and refactoring through customizable modes (Architect, Code, Debug, etc.). It supports multiple model providers, integrates with the Model Context Protocol (MCP), and allows developers to extend functionality with custom tools and workflows. |
| Langflow | A new visual framework for building multi-agent and RAG applications. It is open-source, Python-powered, fully customizable, and LLM and vector store agnostic. Its intuitive interface allows for easy manipulation of AI building blocks, enabling developers to quickly prototype and turn their ideas into powerful, real-world solutions. |
| LiteLLM | An open-source Python library that provides a unified API for interacting with multiple large language model providers. It allows developers to switch between different models with minimal code changes, optimizing cost and performance. LiteLLM simplifies integration by offering a single interface for various LLM endpoints, enabling seamless experimentation and deployment across different AI providers. A minimal usage sketch follows this table. |
| Make | A powerful, enterprise-scale automation platform. It offers flow control, data manipulation, HTTP/webhooks, AI agents and tools, notes, an MCP server, and many other features at your service. |
| Manus | A workflow and AI-agent orchestration platform that lets users integrate custom APIs, define automation logic, and run LLM-powered tools inside a unified interface. Manus supports custom model backends (such as AI/ML API), prompt templates, request routing, secure secret storage, and visual debugging. |
| Marvin | A Python framework by PrefectHQ for building agentic AI workflows and producing structured outputs. It allows developers to define Tasks (objective-focused units of work) and assign them to specialized Agents (LLM-powered configurations). Marvin supports type-safe results via Pydantic models, integrates with multiple LLM providers through Pydantic AI, and enables orchestration of multi-agent threads for complex workflows. |
| n8n | An open-source workflow automation tool that lets you connect various services and automate tasks without writing full integrations manually. |
| Roo Code | An autonomous AI programming agent that works right inside your editor, such as VS Code. It helps you code faster and smarter — whether you're starting a new project, maintaining existing code, or exploring new technologies. |
| SillyTavern | A locally installed user interface that allows you to interact with text generation LLMs, image generation engines, and TTS voice models. Integration with the AI/ML API currently applies only to LLMs. |
| Toolhouse | A Backend-as-a-Service (BaaS) to build, run, and manage AI agents. Toolhouse simplifies the process of building agents in a local environment and running them in production. |
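Most of these tools connect to the AI/ML API the same way: by pointing an OpenAI-compatible client at our base URL with your API key. As an illustration, here is a minimal, unofficial sketch for LiteLLM. The model ID, the key placeholder, and the `openai/` provider prefix are assumptions made for this example; check the dedicated integration guide for the exact settings supported in your account.

```python
# pip install litellm
from litellm import completion

# Minimal sketch (not the official integration guide): route an
# OpenAI-compatible chat request through the AI/ML API endpoint.
# The model ID and key placeholder below are illustrative assumptions.
response = completion(
    model="openai/gpt-4o",                  # "openai/" prefix selects the OpenAI-compatible route
    api_base="https://api.aimlapi.com/v1",  # AI/ML API base URL
    api_key="<YOUR_AIMLAPI_KEY>",
    messages=[{"role": "user", "content": "Hello"}],
)

print(response.choices[0].message.content)
```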
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/perplexity.md # Perplexity - [sonar](/api-references/text-models-llm/perplexity/sonar.md) - [sonar-pro](/api-references/text-models-llm/perplexity/sonar-pro.md) --- # Source: https://docs.aimlapi.com/api-references/video-models/pixverse.md # PixVerse - [v5/text-to-video](/api-references/video-models/pixverse/v5-text-to-video.md) - [v5/image-to-video](/api-references/video-models/pixverse/v5-image-to-video.md) - [v5/transition](/api-references/video-models/pixverse/v5-transition.md) - [v5.5/text-to-video](/api-references/video-models/pixverse/v5-5-text-to-video.md) - [v5.5/image-to-video](/api-references/video-models/pixverse/v5-5-image-to-video.md) - [lip-sync](/api-references/video-models/pixverse/lip-sync.md) --- # Source: https://docs.aimlapi.com/api-references/image-models/alibaba-cloud/qwen-image-edit.md # qwen-image-edit {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/qwen-image-edit` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview The image editing variant of our 20B [qwen-image](https://docs.aimlapi.com/api-references/image-models/alibaba-cloud/qwen-image) model. It expands the model’s distinctive text rendering abilities to editing tasks, making accurate text modifications within images possible. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/qwen-image-edit"]},"prompt":{"type":"string","maxLength":800,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"image":{"type":"string","description":"The image to be edited. Enter the Base64 encoding of the picture or an accessible URL. Image URL: Make sure that the image URL is accessible. 
Base64-encoded content: The format must be in lowercase."},"negative_prompt":{"type":"string","maxLength":500,"description":"The description of elements to avoid in the generated image."},"watermark":{"type":"boolean","default":false,"description":"Add an invisible watermark to the generated images."}},"required":["model","prompt","image"],"title":"alibaba/qwen-image-edit"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image using an input image and a prompt that defines how it should be edited. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "alibaba/qwen-image-edit", "prompt": "Make the dinosaur sit on a lounge chair with its back to the camera, looking toward the water. The setting sun has almost disappeared below the horizon.", "image": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png" } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'alibaba/qwen-image-edit', prompt: 'Make the dinosaur sit on a lounge chair with its back to the camera, looking toward the water. The setting sun has almost disappeared below the horizon.', image: 'https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png', }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "created": 1756832341, "data": [ { "url": "https://dashscope-result-sgp.oss-ap-southeast-1.aliyuncs.com/7d/06/20250903/1955eee6/ac748d89-d6b1-4d4e-bc65-eea543098bb9-1.png?Expires=1757438140&OSSAccessKeyId=LTAI5tRcsWJEymQaTsKbKqGf&Signature=aDhUphXV84V1nPMmdRl49ShSKxY%3D" } ] } ``` {% endcode %}
We obtained the following 1184x896 image by running this code example:

'Make the dinosaur sit on a lounge chair with its back to the camera, looking toward the water. The setting sun has almost disappeared below the horizon.'
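The returned URL points directly to the edited image. If you want to store the result locally, a minimal sketch along these lines should work; it assumes `data` is the parsed JSON from the Quick Example above, and the output file name is arbitrary.

{% code overflow="wrap" %}
```python
import requests

# Assumes `data` is the parsed JSON response from the Quick Example above.
# The URL carries an `Expires` parameter, so download it soon after generation.
image_url = data["data"][0]["url"]

image_response = requests.get(image_url)
image_response.raise_for_status()

# "edited.png" is just an example file name for this sketch.
with open("edited.png", "wb") as f:
    f.write(image_response.content)
```
{% endcode %}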

--- # Source: https://docs.aimlapi.com/api-references/image-models/alibaba-cloud/qwen-image.md # qwen-image {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/qwen-image` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A high-performance image generation base model that can handle intricate text rendering and perform accurate image editing. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/qwen-image"]},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"output_format":{"type":"string","enum":["jpeg","png"],"default":"jpeg","description":"The format of the generated image."},"image_size":{"anyOf":[{"type":"object","properties":{"width":{"type":"integer","minimum":512,"maximum":1536,"default":1024},"height":{"type":"integer","minimum":512,"maximum":1536,"default":768}},"description":"For both height and width, the value must be a multiple of 32."},{"type":"string","enum":["square_hd","square","portrait_4_3","portrait_16_9","landscape_4_3","landscape_16_9"],"description":"The size of the generated image."}],"default":"landscape_4_3"},"num_images":{"type":"number","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."},"seed":{"type":"integer","minimum":1,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"enable_safety_checker":{"type":"boolean","default":true,"description":"If set to True, the safety checker will be enabled."},"guidance_scale":{"type":"number","minimum":1,"maximum":20,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt when looking for a related image to show you."},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated image."}},"required":["model","prompt"],"title":"alibaba/qwen-image"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified size using a simple prompt. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "alibaba/qwen-image", "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses." } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'alibaba/qwen-image', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.', }), }); const data = await response.json(); console.log('Generation:', data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "images": [ { "url": "https://cdn.aimlapi.com/eagle/files/kangaroo/0WBwo2ruHEK9vpmtxu04G.jpeg", "width": 1024, "height": 768, "content_type": "image/jpeg" } ], "timings": { "inference": 5.732342581963167 }, "seed": 4128479875, "has_nsfw_concepts": [ false ], "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses." } ``` {% endcode %}
We obtained the following 1024x768 image by running this code example:

'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.'
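Beyond the quick example, the API schema above lists optional parameters such as `image_size`, `num_images`, `seed`, `negative_prompt`, and `output_format`. Below is a minimal sketch of a request that sets them explicitly; the values are illustrative only (for instance, width and height must be multiples of 32 between 512 and 1536, per the schema).

{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

# Illustrative values only; consult the API schema above for exact constraints.
payload = {
    "model": "alibaba/qwen-image",
    "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.",
    "image_size": {"width": 1024, "height": 768},  # both must be multiples of 32
    "num_images": 1,
    "seed": 42,                                    # fixed seed for reproducible output
    "negative_prompt": "blurry, low quality",
    "output_format": "png",
}

response = requests.post(
    "https://api.aimlapi.com/v1/images/generations",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json=payload,
)
print(json.dumps(response.json(), indent=2, ensure_ascii=False))
```
{% endcode %}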

--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen-max.md # qwen-max {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `qwen-max` * `qwen-max-2025-01-25` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview The large-scale Mixture-of-Experts (MoE) language model. Excels in language understanding and task performance. Supports 29 languages, including Chinese, English, and Arabic. {% hint style="success" %} [Create AI/ML API Key](https://aimlapi.com/app/keys) {% endhint %}
How to make the first API call **1️⃣ Required setup (don’t skip this)**\ ▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\ ▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI. **2️⃣ Copy the code example**\ At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project. **3️⃣ Update the snippet for your use case**\ ▪ **Insert your API key:** replace `` with your real AI/ML API key.\ ▪ **Select a model:** set the `model` field to the model you want to call.\ ▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models). **4️⃣ (Optional) Tune the request**\ Depending on the model type, you can add optional parameters to control the output (e.g., generation settings, quality, length, etc.). See the API schema below for the full list. **5️⃣ Run your code**\ Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/qwen-max","qwen-max","qwen-max-2025-01-25"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. 
required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. 
logprobs must be set to True if this parameter is used."}},"required":["model","messages"],"title":"alibaba/qwen-max"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"qwen-max", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'qwen-max', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "chatcmpl-62aa6045-cee9-995a-bbf5-e3b7e7f3d683", "system_fingerprint": null, "object": "chat.completion", "choices": [ { "index": 0, "finish_reason": "stop", "logprobs": null, "message": { "role": "assistant", "content": "Hello! How can I assist you today? 😊" } } ], "created": 1756983980, "model": "qwen-max", "usage": { "prompt_tokens": 30, "completion_tokens": 148, "total_tokens": 178, "prompt_tokens_details": { "cached_tokens": 0 } } } ``` {% endcode %}
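In most applications you only need the assistant's text and the token usage rather than the whole response object. Here is a minimal sketch of pulling them out of the JSON shown above; it assumes `data` is the parsed response from the Code Example.

{% code overflow="wrap" %}
```python
# Assumes `data` is the parsed JSON response from the Code Example above.
reply = data["choices"][0]["message"]["content"]
print(reply)  # e.g. "Hello! How can I assist you today? 😊"

# Token accounting, useful for cost monitoring.
usage = data["usage"]
print(usage["prompt_tokens"], usage["completion_tokens"], usage["total_tokens"])
```
{% endcode %}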
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen-plus.md # qwen-plus {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `qwen-plus` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview An advanced large language model. Multilingual support, including Chinese and English. Enhanced reasoning capabilities for complex tasks. Improved instruction-following abilities. {% hint style="success" %} [Create AI/ML API Key](https://aimlapi.com/app/keys) {% endhint %}
How to make the first API call **1️⃣ Required setup (don’t skip this)**\ ▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\ ▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI. **2️⃣ Copy the code example**\ At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project. **3️⃣ Update the snippet for your use case**\ ▪ **Insert your API key:** replace `` with your real AI/ML API key.\ ▪ **Select a model:** set the `model` field to the model you want to call.\ ▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models). **4️⃣ (Optional) Tune the request**\ Depending on the model type, you can add optional parameters to control the output (e.g., generation settings, quality, length, etc.). See the API schema below for the full list. **5️⃣ Run your code**\ Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/qwen-plus"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. 
required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. 
If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."}},"required":["model","messages"],"title":"alibaba/qwen-plus"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}}
```

## Code Example

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json"
    },
    json={
        "model": "qwen-plus",
        "messages": [
            {
                "role": "user",
                "content": "Hello"  # insert your prompt here, instead of Hello
            }
        ]
    }
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  const response = await fetch('https://api.aimlapi.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      // insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
      'Authorization': 'Bearer <YOUR_AIMLAPI_KEY>',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'qwen-plus',
      messages: [
        {
          role: 'user',
          content: 'Hello' // insert your prompt here, instead of Hello
        }
      ],
    }),
  });

  const data = await response.json();
  console.log(JSON.stringify(data, null, 2));
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': 'chatcmpl-4fda1bd7-a679-95b9-b81d-1bfc6ae98448', 'system_fingerprint': None, 'object': 'chat.completion', 'choices': [{'index': 0, 'finish_reason': 'stop', 'logprobs': None, 'message': {'role': 'assistant', 'content': 'Hello! How can I assist you today? If you have any questions or need help with anything, just let me know! 😊'}}], 'created': 1744143962, 'model': 'qwen-plus', 'usage': {'prompt_tokens': 8, 'completion_tokens': 68, 'total_tokens': 76, 'prompt_tokens_details': {'cached_tokens': 0}}} ``` {% endcode %}
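The request schema above also documents a `stream` flag: with `stream: true`, the API returns server-sent events containing `chat.completion.chunk` objects instead of a single JSON body. Below is a minimal sketch of consuming such a stream with the `openai` Python SDK (the same client used in the embedding examples in these docs) pointed at the AIML API base URL. The model ID and the `<YOUR_AIMLAPI_KEY>` placeholder follow the example above; treat this as an illustration of the streaming shape rather than an official snippet.

```python
from openai import OpenAI

# Initialize the API client
client = OpenAI(
    # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
    api_key="<YOUR_AIMLAPI_KEY>",
    base_url="https://api.aimlapi.com/v1",
)

# stream=True asks the API to return server-sent events with
# chat.completion.chunk objects, as described in the schema above.
stream = client.chat.completions.create(
    model="qwen-plus",
    messages=[{"role": "user", "content": "Write a haiku about rivers."}],
    stream=True,
)

for chunk in stream:
    # Each chunk carries a delta with the next fragment of the assistant message.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```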
--- # Source: https://docs.aimlapi.com/api-references/embedding-models/alibaba-cloud/qwen-text-embedding-v3.md # qwen-text-embedding-v3 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/qwen-text-embedding-v3` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A compact language model supporting over 100 languages. It features a 4B parameter architecture, a context length of up to 32K tokens, and outputs embeddings with up to 2560 dimensions. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/embeddings > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Embedding.v1.CreateEmbeddingsResponseDTO":{"type":"object","properties":{"object":{"type":"string","enum":["object"]},"data":{"type":"array","items":{"type":"object","properties":{"object":{"type":"string","enum":["embedding"]},"index":{"type":"number"},"embedding":{"type":"array","items":{"type":"number"}}},"required":["object","index","embedding"]}},"model":{"type":"string"},"usage":{"type":"object","properties":{"total_tokens":{"type":"number","nullable":true}}}},"required":["object","data","model","usage"]}}},"paths":{"/v1/embeddings":{"post":{"operationId":"EmbeddingsController_createEmbeddings_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["alibaba/qwen-text-embedding-v3"]},"input":{"anyOf":[{"type":"string","minLength":1},{"type":"array","items":{"type":"string"},"minItems":1}],"description":"Input text to embed, encoded as a string or array of tokens."},"dimensions":{"type":"integer","minimum":64,"maximum":2048,"default":1024,"description":"The number of dimensions for the embedding. Default is 1024."}},"required":["model","input"]}}}},"responses":{"200":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Embedding.v1.CreateEmbeddingsResponseDTO"}}}}},"tags":["Embeddings"]}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %}
```python
import openai

# Initialize the API client
client = openai.OpenAI(
    # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
    api_key="<YOUR_AIMLAPI_KEY>",
    base_url="https://api.aimlapi.com/v1",
)

# Define the text for which to generate an embedding
text = "Laura is a DJ."

# Request the embedding
response = client.embeddings.create(
    input=text,
    model="alibaba/qwen-text-embedding-v3"
)

# Print the embedding
print(response)
```
{% endtab %}

{% tab title="JS" %}
```javascript
import OpenAI from "openai";
import util from "util";

// Initialize the API client
const client = new OpenAI({
  // Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
  apiKey: "<YOUR_AIMLAPI_KEY>",
  baseURL: "https://api.aimlapi.com/v1",
});

// Define the text for which to generate an embedding
const text = "Laura is a DJ.";

const response = await client.embeddings.create({
  input: text,
  model: "alibaba/qwen-text-embedding-v3",
});

// Convert embedding to a regular array (not TypedArray)
const pythonLikeResponse = {
  ...response,
  data: response.data.map(item => ({
    ...item,
    embedding: Array.from(item.embedding),
  })),
};

// Python-like print
console.log(
  util.inspect(pythonLikeResponse, {
    depth: null,
    maxArrayLength: null,
    compact: true,
  })
);
```
{% endtab %}
{% endtabs %}

This example shows how to set up an API client, send text to the embedding API, and print the response with the embedding vector. Note how large a vector the model returns for even a single short input phrase.
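The request schema above also exposes an optional `dimensions` field (64–2048, default 1024) that controls the length of the returned vector. The sketch below is illustrative and reuses the `client` and `text` variables from the Python tab above; the response shown further down corresponds to the basic request without this field.

```python
# Ask for a shorter vector via the optional `dimensions` field
# (64–2048 per the schema above; the default is 1024).
short_response = client.embeddings.create(
    input=text,
    model="alibaba/qwen-text-embedding-v3",
    dimensions=256,
)

# Should print 256 if the gateway honors the requested dimensionality.
print(len(short_response.data[0].embedding))
```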
Response {% code overflow="wrap" %} ```json CreateEmbeddingResponse(data=[Embedding(embedding=[-0.1248791441321373, 0.022519821301102638, -0.06440284103155136, -0.0016516941832378507, -0.07079307734966278, 0.06540372222661972, -0.012751608155667782, 0.06559620052576065, -0.02534923516213894, -0.006650084163993597, 0.060668785125017166, 0.03968878090381622, -0.04049718379974365, -0.021557435393333435, 0.03533879667520523, -0.014830361120402813, 0.041382577270269394, 0.017428802326321602, -0.01000880915671587, -0.0008950185729190707, -0.01976739801466465, 0.0008144187740981579, -0.05716570094227791, -0.005095831584185362, 0.00028510671108961105, 0.033548761159181595, -0.06294001638889313, -0.03601246699690819, -0.0015169602120295167, -0.02221185900270939, -0.013598507270216942, -0.001029151026159525, -0.03345252200961113, -0.052122801542282104, -0.009003116749227047, -0.02221185900270939, -0.026638831943273544, -0.006346932612359524, -0.06702052801847458, -0.048042286187410355, 0.024271363392472267, 0.055895350873470306, -0.0032215856481343508, -0.05604933202266693, 0.03940006345510483, -0.026311621069908142, 0.01810247264802456, -0.04115160554647446, -0.007025414612144232, -0.03068085014820099, -0.005654015112668276, 0.01732293888926506, 0.059013482183218, -0.0035175192169845104, -0.014801489189267159, 0.05258474498987198, 0.02967997081577778, -0.044847164303064346, -0.04538610205054283, 0.025214500725269318, -0.023078005760908127, 0.044770173728466034, 0.006929175928235054, -0.026119142770767212, -0.01185658946633339, 0.1010119840502739, -0.010268653742969036, -0.026215381920337677, -0.022616060450673103, -0.0014123007422313094, -0.04103611782193184, -0.015099829062819481, -0.03175872191786766, -0.020922262221574783, -0.06802140921354294, 0.0074151805602014065, 0.0058994232676923275, -0.02323198691010475, 0.006231446284800768, 0.016995728015899658, 0.07225590944290161, 0.00991257093846798, 0.018439307808876038, 0.008223584853112698, -0.04342283681035042, 0.07833818346261978, -0.05154537037014961, 0.055317919701337814, -0.017756013199687004, -0.05323916673660278, -0.02779369428753853, -0.041382577270269394, -0.004922601860016584, -0.04230646789073944, -0.04938962683081627, -0.024117382243275642, 0.0051968819461762905, -0.007617281749844551, 0.035627514123916626, 0.022192610427737236, -0.0244830884039402, 0.04827325791120529, 0.02365543693304062, -0.014108572155237198, 0.00924371276050806, -0.06394089758396149, 0.026773566380143166, 0.04030470550060272, 0.014358792454004288, -0.030103418976068497, 0.0351463183760643, 0.03227841109037399, 0.0032865465618669987, -0.03972727432847023, -0.01701497659087181, -0.010595864616334438, -0.056395791471004486, -0.058205075562000275, 0.003385191084817052, -0.03362575173377991, 0.00024210011179093271, 0.025580206885933876, 0.022673802450299263, 0.0013822262408211827, 0.03447265177965164, -0.038245201110839844, 0.026061400771141052, 0.034106943756341934, 0.017149711027741432, 0.011529378592967987, 0.03972727432847023, 0.02498352900147438, -0.03657064959406853, -0.006438359152525663, -0.08515187352895737, -0.038495421409606934, 0.026022905483841896, -0.01478224154561758, -0.05058298259973526, 0.012626498006284237, 0.005254624877125025, 0.01333866361528635, -0.007795322686433792, -0.035107824951410294, -0.008863571099936962, 0.023501453921198845, 0.02554171159863472, -0.01299220509827137, -0.03156624734401703, -0.022616060450673103, -0.01649528741836548, 0.0016685358714312315, 0.005596271716058254, -0.026658078655600548, 0.0072178915143013, 
0.05054448917508125, -0.002359047532081604, 0.05119891092181206, -0.002528667915612459, 0.031027309596538544, -0.02396339923143387, -0.023251235485076904, -0.008916501887142658, -0.017496168613433838, 0.021730665117502213, 0.014888104051351547, 0.02271229960024357, 0.01189508568495512, -0.00544710224494338, 0.004414943512529135, -0.008372753858566284, -0.008002235554158688, -0.00838237814605236, -0.017621278762817383, 0.036185696721076965, 0.032605621963739395, 0.009874075651168823, 1.3016639968554955e-05, -0.004732531029731035, -0.026812061667442322, -0.01202019490301609, -0.01050924975425005, -0.03306756541132927, -0.027350997552275658, -0.0037388678174465895, 0.01798698492348194, 0.02278929017484188, 0.04065116494894028, 0.021384207531809807, 0.01823720522224903, -0.03984276205301285, -0.026523346081376076, 0.02881382219493389, 0.013858351856470108, 0.039765771478414536, -0.005355675704777241, -0.027947675436735153, -0.005557776428759098, -0.01823720522224903, 0.020806774497032166, -0.03626268729567528, 0.030276648700237274, -0.003029108513146639, -0.014993966557085514, -0.048350248485803604, -0.028294134885072708, -0.08484391123056412, -0.001842968282289803, 0.0003948788216803223, -0.010884580202400684, 0.05943693220615387, -0.013925719074904919, -0.01075947005301714, -0.014195187017321587, -0.008935749530792236, -0.043268851935863495, 0.03451114520430565, 0.024175124242901802, -0.005307556129992008, -0.002380701247602701, 0.037379052489995956, 0.04665645211935043, 0.005976414307951927, 0.018805013969540596, 0.014031581580638885, -0.014041204936802387, 0.061669666320085526, 0.029795456677675247, -0.031027309596538544, 0.006082276813685894, 0.01740955375134945, -0.027909180149435997, -0.023116501048207283, -0.027004538103938103, 0.030295897275209427, -0.01723632588982582, 0.022134866565465927, 0.04115160554647446, 0.014984343200922012, 0.002961741527542472, -0.007795322686433792, -0.0004956285702064633, -0.07560500502586365, 0.044154249131679535, -0.024059638381004333, -0.003746085800230503, 0.024059638381004333, 0.011635241098701954, -0.0033009822946041822, 0.05743516981601715, -0.005312368273735046, -0.024386849254369736, 0.02365543693304062, 0.02057580277323723, 0.012414772994816303, 0.04908166080713272, 0.011750726960599422, -0.030238153412938118, 0.03972727432847023, -0.007333377841860056, 0.02931426279246807, -0.04403876140713692, 0.054817479103803635, 0.04242195561528206, 0.002497390378266573, 0.014801489189267159, -0.00832944642752409, 0.025022024288773537, 0.017871499061584473, 0.016254691407084465, 0.027023786678910255, -0.01075947005301714, 0.02046031691133976, -0.007939680479466915, -0.033298540860414505, 0.036243438720703125, 0.04442371800541878, -0.055510398000478745, -0.0017407148843631148, 0.048042286187410355, 0.06132320687174797, -0.04072815552353859, 0.0501980297267437, 0.01954605057835579, -0.001839359407313168, -0.011875837109982967, 0.021307215094566345, 0.037244319915771484, -0.023693932220339775, -0.047233883291482925, -0.014762993901968002, -0.008651846088469028, 0.03628193587064743, 0.0035824801307171583, 0.0074151805602014065, 0.037494540214538574, -0.01757315918803215, -0.1763090342283249, -0.011452388018369675, -0.04153655841946602, 0.01832382008433342, -0.04530911147594452, -0.04349982738494873, 0.003377973334863782, 0.05327766388654709, -0.04985157027840614, 0.0385916605591774, 0.032047439366579056, -0.03189345821738243, -0.022731546312570572, 0.001054413616657257, 0.005095831584185362, 0.02459857426583767, -0.0023361907806247473, -0.00982595607638359, 
0.02523374930024147, -0.030103418976068497, -0.01217417698353529, 0.010249406099319458, 0.05065997317433357, -0.057281188666820526, -0.02171141840517521, 0.016418296843767166, 0.017756013199687004, 0.0008643424953334033, -0.02271229960024357, -0.0023758893366903067, 0.01660115085542202, -0.0023289730306714773, -0.004265774041414261, 0.05785861983895302, 0.003989087883383036, 0.006106336135417223, -0.07829968631267548, -0.05662676319479942, -0.009537240490317345, 0.0172651968896389, 0.012251167558133602, 0.03772551193833351, -0.035107824951410294, 0.016119956970214844, 0.03075784258544445, -0.0015313959447667003, -0.03759077936410904, -0.029910942539572716, -0.03056536428630352, 0.058590032160282135, -0.042691420763731, -0.026311621069908142, -0.03451114520430565, -0.014195187017321587, 0.0031806842889636755, -0.01623544469475746, -0.023674683645367622, 0.013733241707086563, -0.022134866565465927, -0.03177797049283981, -0.04338433966040611, -0.035550523549318314, 0.036050960421562195, 0.05216129496693611, 0.0010483987862244248, -0.012607250362634659, 0.009590172208845615, 0.02090301364660263, 0.033163804560899734, -0.031546998769044876, 0.01568688452243805, -0.05054448917508125, -0.005836868193000555, 0.013579259626567364, -9.653929737396538e-05, 0.02348220720887184, -0.01648566499352455, -0.03545428439974785, 0.017890747636556625, -0.08992530405521393, 0.04119010269641876, 0.02837112545967102, 0.03820670768618584, -0.012944085523486137, -0.008127345703542233, -0.006274753715842962, -0.01107705757021904, 0.03535804525017738, 0.03220142051577568, 0.2115708291530609, -0.0038567599840462208, -0.049351129680871964, 0.014291425235569477, 0.00301948469132185, -0.015099829062819481, -0.010884580202400684, -0.048234764486551285, -0.005577024072408676, -0.01879538968205452, 0.005312368273735046, 0.012655369937419891, -0.03589697927236557, -0.0032841407228261232, -0.07252537459135056, 0.03381822630763054, -0.03081558458507061, -0.016678141430020332, 0.01954605057835579, -0.003962622489780188, -0.000775923312176019, 0.0050525241531431675, -0.018150590360164642, 0.016620397567749023, 0.018198709934949875, -0.014512773603200912, 0.0018574041314423084, 0.049543607980012894, 0.003890443593263626, 0.016707012429833412, 0.004381260368973017, -0.025329986587166786, 0.025445474311709404, -0.018420059233903885, -0.010528497397899628, 0.007723143789917231, -0.01651453599333763, -0.03811046853661537, 0.0014748558169230819, 0.0002642048930283636, -0.026850556954741478, -0.03691710904240608, -0.0013713993830606341, 0.01621619611978531, 0.02034483104944229, -0.0489661768078804, 0.04430823028087616, 0.002740392927080393, -0.06544221937656403, 0.007078345399349928, 0.0025262620765715837, -0.023501453921198845, 0.011317653581500053, -0.00256475736387074, -0.03262487053871155, -0.03709033876657486, 0.015590645372867584, -0.023020261898636818, -0.0194594357162714, 0.017534663900732994, 0.019536426290869713, -0.009349575266242027, -0.06070727854967117, 0.0313737690448761, -0.004128633998334408, 0.04192151501774788, -0.006972483359277248, -0.028698336333036423, -0.01433954481035471, -0.033548761159181595, 0.004989969078451395, 0.021133987233042717, 0.038630153983831406, -0.0005275075673125684, -0.0015374108916148543, 0.0007290070643648505, -0.016841746866703033, 0.022981766611337662, 0.03635892644524574, 0.06686654686927795, -0.013762113638222218, -0.040574174374341965, 0.003298576455563307, 0.006746322847902775, 0.017842628061771393, -0.015648389235138893, -0.003349101636558771, 0.0030748217832297087, 
-0.026715822517871857, -0.0008360724314115942, 0.023867161944508553, -0.03884188085794449, -0.0671360120177269, 0.03845692425966263, -0.012944085523486137, -0.10093499720096588, 0.026850556954741478, 0.028005419299006462, -0.02686980366706848, 0.012472516857087612, -0.0072178915143013, -0.0013377158902585506, -0.011712231673300266, 0.00436441833153367, 0.04715689271688461, -0.02434835396707058, -0.011952828615903854, 0.01937282085418701, -0.011154048144817352, -0.00981152057647705, 0.0010941120563074946, -0.0022195016499608755, -0.05681924149394035, 0.00227002683095634, 0.00857966672629118, -0.013425278477370739, 0.0024312264285981655, 0.004686817526817322, 0.07437315583229065, 0.03302907198667526, -0.03610870614647865, 0.006780005991458893, -0.015956351533532143, 0.052738726139068604, -0.04523212090134621, -0.03483835607767105, 0.01299220509827137, -0.03270186111330986, 0.00364744127728045, 0.04946661740541458, 0.03418393433094025, -0.03239389881491661, 0.018314197659492493, 0.004520806018263102, 0.031412262469530106, 0.03152775019407272, -0.0037268379237502813, 0.006924363784492016, -0.008161029778420925, -0.008805827237665653, -0.0011777193285524845, -0.03560826554894447, -0.0021064213942736387, 0.006067840848118067, 0.01292483787983656, 0.0010074973106384277, -0.009267772547900677, 0.042652927339076996, -0.007203455548733473, 0.023020261898636818, 0.0210184995085001, 0.06605814397335052, 0.007501795422285795, -0.042267974466085434, 0.017063096165657043, -0.01748654432594776, -0.00511507922783494, -0.030411383137106895, -0.010441883467137814, -0.04119010269641876, 0.010441883467137814, -0.06798291206359863, 0.03836068883538246, 0.06532672792673111, 0.019661536440253258, -0.039380814880132675, 0.013261673040688038, 0.06913777440786362, -0.03614719957113266, -0.02873683162033558, 0.027524227276444435, -0.030045676976442337, -0.056896232068538666, 0.005952354520559311, -0.013713994063436985, -0.02159593068063259, -0.046156011521816254, 0.004720501136034727, -0.004058861173689365, -0.003919315058737993, 0.020248591899871826, 0.027697455137968063, 0.036127954721450806, 0.015090204775333405, 0.0269852913916111, 0.008372753858566284, 0.033991456031799316, -0.02121097780764103, -0.029545236378908157, 0.008699965663254261, -0.017833003774285316, 0.09877925366163254, -0.007434428203850985, -0.022808536887168884, 0.05947542563080788, 0.03456888720393181, -0.03526180610060692, 0.07410368323326111, 0.018006233498454094, -0.023174243047833443, 0.012578379362821579, 0.015224939212203026, -0.0049081663601100445, -0.015484782867133617, 0.012722737155854702, -0.016860995441675186, -0.021422702819108963, -0.012905590236186981, -0.027947675436735153, -0.0008745678351260722, -0.06063028797507286, -0.03148925304412842, -0.024213619530200958, 0.017861874774098396, 0.026388611644506454, 0.006707827094942331, -0.01082683727145195, 0.054317038506269455, 0.006356556434184313, -0.0495821014046669, -0.021056994795799255, -0.009137850254774094, 0.08014746755361557, -0.00696767121553421, 0.00638542789965868, 0.014647508040070534, 0.006943611428141594, -0.017303692176938057, 0.02328973077237606, -0.005182445980608463, -0.0016384613700211048, -0.03337553143501282, -0.009619043208658695, 0.014252929948270321, 0.02504127100110054, -0.007496983278542757, -0.02806316316127777, 0.043846286833286285, -0.019007114693522453, 0.014916975982487202, -0.035492777824401855, -0.03131602704524994, 0.04249894618988037, 0.007877125404775143, 0.02303951047360897, 0.0313737690448761, -0.057204198092222214, -0.004843205213546753, 
0.015917856246232986, 0.05635729804635048, 0.040766652673482895, 0.02126871980726719, -0.0291795302182436, -0.02465631812810898, -0.00578874908387661, 0.03299057483673096, -0.004311487078666687, -0.013511893339455128, -0.047041404992341995, -0.038129713386297226, 0.031470008194446564, 0.01732293888926506, -0.024117382243275642, -0.00955167692154646, -0.022250354290008545, -0.03270186111330986, 0.016841746866703033, 0.035685256123542786, -0.007266010623425245, -0.08276515454053879, 0.034742116928100586, 0.012944085523486137, -0.04095912724733353, -0.01888200454413891, 0.03258637338876724, -0.003832700429484248, -0.04303788021206856, 0.04423123970627785, -0.010172415524721146, 0.017553912475705147, -0.014474278315901756, 0.06444133818149567, -0.06174665689468384, 0.04627149552106857, -0.010316773317754269, -0.028890814632177353, -0.031854961067438126, 0.0032793288119137287, 0.00982595607638359, 0.005389358848333359, 0.011057809926569462, -0.04303788021206856, 0.02271229960024357, 0.04761883616447449, -0.01817946322262287, -0.04068966209888458, 0.013627379201352596, 0.012029819190502167, -0.010105048306286335, 0.0420369990170002, -0.015783123672008514, -0.003539172699674964, -0.026253877207636833, -0.03037288784980774, 0.010085800662636757, 0.032316904515028, -0.020806774497032166, 0.0028197895735502243, 0.057897113263607025, 0.04592503607273102, 0.027832189574837685, -0.03000718168914318, -0.015571397729218006, 0.006958047393709421, 0.031412262469530106, -0.0005127710173837841, 0.015128700993955135, 0.011529378592967987, -0.011500507593154907, 0.006640460342168808, 0.03639741986989975, -0.004689223598688841, 0.006447982974350452, 0.057897113263607025, -0.043653808534145355, -0.016148829832673073, -0.047734323889017105, 0.048542726784944534, 0.006202574819326401, -0.00028029480017721653, -0.008161029778420925, 0.011452388018369675, -0.022115619853138924, -0.032663363963365555, -0.008570043370127678, 0.016668517142534256, 0.04473168030381203, 0.0062891896814107895, 0.0539705827832222, -0.053701113909482956, 0.03526180610060692, 0.04022771492600441, -0.025368481874465942, 0.010903827846050262, 0.025079766288399696, 0.0042152488604187965, 0.0032288033980876207, 0.01765977405011654, 0.013098067604005337, -0.005807996727526188, 0.011712231673300266, 0.06190063804388046, 0.005562588572502136, -0.004042019136250019, 0.008281327784061432, 0.019392069429159164, -0.038187459111213684, -0.0451936237514019, -0.018949370831251144, 0.02692754752933979, 0.006654895842075348, -0.026812061667442322, 0.020229343324899673, -0.001249898225069046, 0.01648566499352455, 0.018198709934949875, 0.015783123672008514, -0.0027740763034671545, 0.01568688452243805, -0.029776208102703094, -0.00034284984576515853, -0.009426566772162914, 0.018227582797408104, 0.00128237868193537, -0.0602453351020813, 0.0317009799182415, -0.012578379362821579, -0.056588269770145416, 0.013483021408319473, -0.01535004936158657, -0.002010182710364461, 0.0018971024546772242, -0.006698203273117542, 0.010191663168370724, 0.05527942627668381, -0.005860927980393171, -0.007987800054252148, -0.038264449685811996, 0.032162923365831375, 0.020691288635134697, -0.0373598076403141, -0.030411383137106895, -0.043846286833286285, 0.006558657623827457, -0.10008809715509415, 0.019334325566887856, -0.028082409873604774, -0.0033611315302550793, -0.040266212075948715, -0.028467364609241486, -0.009748965501785278, 0.020749032497406006, -0.0021064213942736387, 0.009508369490504265, 0.05474048852920532, -0.020498812198638916, 0.007328565698117018, 
-0.007169772405177355, 0.0039602164179086685, -0.0007470517884939909, -0.024810299277305603, -0.018718399107456207, 0.027909180149435997, 0.020056115463376045, -0.020036866888403893, -0.02779369428753853, -0.03314455971121788, -0.019382445141673088, -0.00571175804361701, 0.01854516938328743, -0.01067285519093275, 0.05855153501033783, -0.03393371403217316, 0.013059571385383606, -0.04122859612107277, -0.06451832503080368, 0.02253906987607479, 0.024001894518733025, -0.006082276813685894, 0.017842628061771393, 0.011548626236617565, -0.006760758347809315, -0.03880338370800018, -0.01550403144210577, -0.015321177430450916, 0.02717776782810688, -0.03081558458507061, -0.007453675847500563, 0.02071053721010685, 6.763014243915677e-05, -0.05177634209394455, -0.014599388465285301, -0.006260317750275135, 0.027158519253134727, 0.0030892575159668922, 0.014493525959551334, 0.0145897651091218, 0.010586241260170937, 0.030796337872743607, -0.016004471108317375, 0.017842628061771393, 0.008887630887329578, 0.019151471555233, 0.05662676319479942, -0.03252863138914108, 0.016427921131253242, -0.03914984315633774, -0.005350863561034203, -0.030834833160042763, -0.018583664670586586, -0.005841680336743593, -0.01528268214315176, -0.04323035851120949, 0.030642354860901833, 0.01082683727145195, -0.009113791398704052, 0.053624123334884644, -0.03635892644524574, 0.005211317911744118, 0.04681043326854706, -0.03845692425966263, 0.028024666011333466, -0.005418230779469013, -0.028332630172371864, -0.012664993293583393, 0.029776208102703094, 0.0010574210900813341, -0.03870714455842972, 0.045463092625141144, -0.03595472499728203, -0.004835987463593483, -0.004042019136250019, -0.0414210744202137, 0.011548626236617565, -0.023559197783470154, -0.01760203205049038, -0.0006141222547739744, -0.020556554198265076, 0.005168010480701923, 0.030276648700237274, -0.03880338370800018, 0.024964280426502228, 0.013608131557703018, -0.006524974014610052, -0.011943204328417778, -0.01590823382139206, 0.011596745811402798, 0.0013341068988665938, -0.017120838165283203, -0.034164685755968094, 0.014916975982487202, -0.004251338075846434, 0.056703757494688034, 0.00546153774484992, 0.03308681398630142, 0.025888171046972275, -0.008497864007949829, -0.023424463346600533, 0.023020261898636818, 0.014916975982487202, 0.002159352647140622, -0.03401070460677147, 0.008449745364487171, 0.01571575552225113, -0.008555607870221138, 0.018564416095614433, -0.005865739658474922, -0.014397287741303444, 0.006900304462760687, -0.0018297354690730572, 0.01525381114333868, 0.027755199000239372, -0.03000718168914318, 0.01114442478865385, 0.041652046144008636, -0.015080581419169903, 0.0037942049093544483, 0.01525381114333868, 0.02146119810640812, -0.01277085579931736, 0.041652046144008636, 0.025772685185074806, 0.02942975051701069, -0.027774445712566376, -0.06074577569961548, -0.025252996012568474, -0.018612535670399666, -0.01503246184438467, 0.011981699615716934, 0.027832189574837685, -0.01182771846652031, -0.01283822301775217, -0.11279158294200897, -0.044654689729213715, 0.024771803990006447, -0.040073733776807785, 0.01848742552101612, -0.002214689739048481, -0.002634530421346426, 0.04292239621281624, 0.009344763122498989, -0.01676475629210472, 0.02059505134820938, 0.028505859896540642, 0.01258800271898508, 0.026330867782235146, 0.02509901486337185, -0.04095912724733353, -0.02912178635597229, 0.002711521228775382, 0.010432259179651737, -0.019805895164608955, 0.03581998869776726, -0.007338189519941807, -0.021191729232668877, 0.048542726784944534, 0.004568925127387047, 
-0.031662482768297195, -0.053200673311948776, -0.0420369990170002, 0.007766451220959425, 0.008675905875861645, 0.02429061010479927, 0.027581969276070595, -0.0017491356702521443, 0.024868043139576912, 0.03814896196126938, -0.015677260234951973, -0.012790103442966938, 0.013232801109552383, 0.028294134885072708, 0.007400744594633579, 0.016947608441114426, 0.015811994671821594, -0.01698610559105873, 0.0015590646071359515, -0.010355268605053425, -0.027504978701472282, 0.03383747488260269, 0.025965161621570587, -0.011038562282919884, 0.0872306227684021, -0.037186577916145325, 0.005918670911341906, 0.02812090516090393, -0.0023422057274729013, -0.007121652830392122, -0.0014953064965084195, -0.009118602611124516, 0.019440187141299248, 0.026407858356833458, 0.001731090946123004, 0.01182771846652031, 0.031085053458809853, 0.026658078655600548, 0.00528830848634243, -0.015552150085568428, 0.00214130780659616, -0.0031445948407053947, -0.023001015186309814, -0.016091085970401764, -0.035685256123542786, -0.023328226059675217, 0.0061881388537585735, 0.011692984029650688, -0.009046424180269241, 0.0010742628946900368, -0.006943611428141594, -0.003310606349259615, -0.014705250971019268, -0.034549642354249954, -0.005793560761958361, -0.009845203720033169, -0.012164553627371788, -0.03747529163956642, 0.01665889285504818, -0.017756013199687004, 0.05139138922095299, -5.984834933769889e-05, -0.020133106037974358, 0.04985157027840614, 0.010951947420835495, -0.03501158580183983, -0.018939746543765068, 0.05327766388654709, 0.00946506205946207, -0.0194594357162714, 0.03761002793908119], index=0, object='embedding')], model='text-embedding-v3', object='list', usage=Usage(prompt_tokens=4, total_tokens=4), id='8ef7c577-75e9-9056-ba09-dc4331f81f6d', meta={'usage': {'credits_used': 1}}) ``` {% endcode %}
You can find a more advanced example of using embedding vectors in our article [Find Relevant Answers: Semantic Search with Text Embeddings](https://docs.aimlapi.com/use-cases/find-relevant-answers-semantic-search-with-text-embeddings) in the Use Cases section. --- # Source: https://docs.aimlapi.com/api-references/embedding-models/alibaba-cloud/qwen-text-embedding-v4.md # qwen-text-embedding-v4 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/qwen-text-embedding-v4` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A compact language model supporting over 100 languages. It features a 4B parameter architecture, a context length of up to 32K tokens, and outputs embeddings with up to 2560 dimensions. Embeddings exhibit tighter intra-cluster cohesion and sharper inter-cluster separation compared to [qwen-text-embedding-v3](https://docs.aimlapi.com/api-references/embedding-models/alibaba-cloud/qwen-text-embedding-v3). ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/embeddings > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Embedding.v1.CreateEmbeddingsResponseDTO":{"type":"object","properties":{"object":{"type":"string","enum":["object"]},"data":{"type":"array","items":{"type":"object","properties":{"object":{"type":"string","enum":["embedding"]},"index":{"type":"number"},"embedding":{"type":"array","items":{"type":"number"}}},"required":["object","index","embedding"]}},"model":{"type":"string"},"usage":{"type":"object","properties":{"total_tokens":{"type":"number","nullable":true}}}},"required":["object","data","model","usage"]}}},"paths":{"/v1/embeddings":{"post":{"operationId":"EmbeddingsController_createEmbeddings_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["alibaba/qwen-text-embedding-v4"]},"input":{"anyOf":[{"type":"string","minLength":1},{"type":"array","items":{"type":"string"},"minItems":1}],"description":"Input text to embed, encoded as a string or array of tokens."},"dimensions":{"type":"integer","minimum":64,"maximum":2048,"default":1024,"description":"The number of dimensions for the embedding. Default is 1024."}},"required":["model","input"]}}}},"responses":{"200":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Embedding.v1.CreateEmbeddingsResponseDTO"}}}}},"tags":["Embeddings"]}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %}
```python
import openai

# Initialize the API client
client = openai.OpenAI(
    # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
    api_key="<YOUR_AIMLAPI_KEY>",
    base_url="https://api.aimlapi.com/v1",
)

# Define the text for which to generate an embedding
text = "Laura is a DJ."

# Request the embedding
response = client.embeddings.create(
    input=text,
    model="alibaba/qwen-text-embedding-v4"
)

# Print the embedding
print(response)
```
{% endtab %}

{% tab title="JS" %}
```javascript
import OpenAI from "openai";
import util from "util";

// Initialize the API client
const client = new OpenAI({
  // Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
  apiKey: "<YOUR_AIMLAPI_KEY>",
  baseURL: "https://api.aimlapi.com/v1",
});

// Define the text for which to generate an embedding
const text = "Laura is a DJ.";

const response = await client.embeddings.create({
  input: text,
  model: "alibaba/qwen-text-embedding-v4",
});

// Convert embedding to a regular array (not TypedArray)
const pythonLikeResponse = {
  ...response,
  data: response.data.map(item => ({
    ...item,
    embedding: Array.from(item.embedding),
  })),
};

// Python-like print
console.log(
  util.inspect(pythonLikeResponse, {
    depth: null,
    maxArrayLength: null,
    compact: true,
  })
);
```
{% endtab %}
{% endtabs %}

This example shows how to set up an API client, send text to the embedding API, and print the response with the embedding vector. Note how large a vector the model returns for even a single short input phrase.
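Per the request schema above, `input` also accepts an array of strings, so several texts can be embedded in one call. The sketch below is illustrative and reuses the `client` from the Python tab above (the example texts are made up); the response shown below still corresponds to the single-phrase example.

```python
# Embed several texts in a single request; `input` may be an array of strings
# (see the schema above). Assumes `client` is configured as in the Python example.
texts = [
    "Laura is a DJ.",
    "Laura mixes music on weekends.",
    "The report is due on Monday.",
]

batch_response = client.embeddings.create(
    input=texts,
    model="alibaba/qwen-text-embedding-v4",
)

# Each input string gets its own entry in `data`, identified by `index`.
for item in batch_response.data:
    print(item.index, len(item.embedding))
```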
Response {% code overflow="wrap" %} ```json CreateEmbeddingResponse(data=[Embedding(embedding=[-0.1248791441321373, 0.022519821301102638, -0.06440284103155136, -0.0016516941832378507, -0.07079307734966278, 0.06540372222661972, -0.012751608155667782, 0.06559620052576065, -0.02534923516213894, -0.006650084163993597, 0.060668785125017166, 0.03968878090381622, -0.04049718379974365, -0.021557435393333435, 0.03533879667520523, -0.014830361120402813, 0.041382577270269394, 0.017428802326321602, -0.01000880915671587, -0.0008950185729190707, -0.01976739801466465, 0.0008144187740981579, -0.05716570094227791, -0.005095831584185362, 0.00028510671108961105, 0.033548761159181595, -0.06294001638889313, -0.03601246699690819, -0.0015169602120295167, -0.02221185900270939, -0.013598507270216942, -0.001029151026159525, -0.03345252200961113, -0.052122801542282104, -0.009003116749227047, -0.02221185900270939, -0.026638831943273544, -0.006346932612359524, -0.06702052801847458, -0.048042286187410355, 0.024271363392472267, 0.055895350873470306, -0.0032215856481343508, -0.05604933202266693, 0.03940006345510483, -0.026311621069908142, 0.01810247264802456, -0.04115160554647446, -0.007025414612144232, -0.03068085014820099, -0.005654015112668276, 0.01732293888926506, 0.059013482183218, -0.0035175192169845104, -0.014801489189267159, 0.05258474498987198, 0.02967997081577778, -0.044847164303064346, -0.04538610205054283, 0.025214500725269318, -0.023078005760908127, 0.044770173728466034, 0.006929175928235054, -0.026119142770767212, -0.01185658946633339, 0.1010119840502739, -0.010268653742969036, -0.026215381920337677, -0.022616060450673103, -0.0014123007422313094, -0.04103611782193184, -0.015099829062819481, -0.03175872191786766, -0.020922262221574783, -0.06802140921354294, 0.0074151805602014065, 0.0058994232676923275, -0.02323198691010475, 0.006231446284800768, 0.016995728015899658, 0.07225590944290161, 0.00991257093846798, 0.018439307808876038, 0.008223584853112698, -0.04342283681035042, 0.07833818346261978, -0.05154537037014961, 0.055317919701337814, -0.017756013199687004, -0.05323916673660278, -0.02779369428753853, -0.041382577270269394, -0.004922601860016584, -0.04230646789073944, -0.04938962683081627, -0.024117382243275642, 0.0051968819461762905, -0.007617281749844551, 0.035627514123916626, 0.022192610427737236, -0.0244830884039402, 0.04827325791120529, 0.02365543693304062, -0.014108572155237198, 0.00924371276050806, -0.06394089758396149, 0.026773566380143166, 0.04030470550060272, 0.014358792454004288, -0.030103418976068497, 0.0351463183760643, 0.03227841109037399, 0.0032865465618669987, -0.03972727432847023, -0.01701497659087181, -0.010595864616334438, -0.056395791471004486, -0.058205075562000275, 0.003385191084817052, -0.03362575173377991, 0.00024210011179093271, 0.025580206885933876, 0.022673802450299263, 0.0013822262408211827, 0.03447265177965164, -0.038245201110839844, 0.026061400771141052, 0.034106943756341934, 0.017149711027741432, 0.011529378592967987, 0.03972727432847023, 0.02498352900147438, -0.03657064959406853, -0.006438359152525663, -0.08515187352895737, -0.038495421409606934, 0.026022905483841896, -0.01478224154561758, -0.05058298259973526, 0.012626498006284237, 0.005254624877125025, 0.01333866361528635, -0.007795322686433792, -0.035107824951410294, -0.008863571099936962, 0.023501453921198845, 0.02554171159863472, -0.01299220509827137, -0.03156624734401703, -0.022616060450673103, -0.01649528741836548, 0.0016685358714312315, 0.005596271716058254, -0.026658078655600548, 0.0072178915143013, 
0.05054448917508125, -0.002359047532081604, 0.05119891092181206, -0.002528667915612459, 0.031027309596538544, -0.02396339923143387, -0.023251235485076904, -0.008916501887142658, -0.017496168613433838, 0.021730665117502213, 0.014888104051351547, 0.02271229960024357, 0.01189508568495512, -0.00544710224494338, 0.004414943512529135, -0.008372753858566284, -0.008002235554158688, -0.00838237814605236, -0.017621278762817383, 0.036185696721076965, 0.032605621963739395, 0.009874075651168823, 1.3016639968554955e-05, -0.004732531029731035, -0.026812061667442322, -0.01202019490301609, -0.01050924975425005, -0.03306756541132927, -0.027350997552275658, -0.0037388678174465895, 0.01798698492348194, 0.02278929017484188, 0.04065116494894028, 0.021384207531809807, 0.01823720522224903, -0.03984276205301285, -0.026523346081376076, 0.02881382219493389, 0.013858351856470108, 0.039765771478414536, -0.005355675704777241, -0.027947675436735153, -0.005557776428759098, -0.01823720522224903, 0.020806774497032166, -0.03626268729567528, 0.030276648700237274, -0.003029108513146639, -0.014993966557085514, -0.048350248485803604, -0.028294134885072708, -0.08484391123056412, -0.001842968282289803, 0.0003948788216803223, -0.010884580202400684, 0.05943693220615387, -0.013925719074904919, -0.01075947005301714, -0.014195187017321587, -0.008935749530792236, -0.043268851935863495, 0.03451114520430565, 0.024175124242901802, -0.005307556129992008, -0.002380701247602701, 0.037379052489995956, 0.04665645211935043, 0.005976414307951927, 0.018805013969540596, 0.014031581580638885, -0.014041204936802387, 0.061669666320085526, 0.029795456677675247, -0.031027309596538544, 0.006082276813685894, 0.01740955375134945, -0.027909180149435997, -0.023116501048207283, -0.027004538103938103, 0.030295897275209427, -0.01723632588982582, 0.022134866565465927, 0.04115160554647446, 0.014984343200922012, 0.002961741527542472, -0.007795322686433792, -0.0004956285702064633, -0.07560500502586365, 0.044154249131679535, -0.024059638381004333, -0.003746085800230503, 0.024059638381004333, 0.011635241098701954, -0.0033009822946041822, 0.05743516981601715, -0.005312368273735046, -0.024386849254369736, 0.02365543693304062, 0.02057580277323723, 0.012414772994816303, 0.04908166080713272, 0.011750726960599422, -0.030238153412938118, 0.03972727432847023, -0.007333377841860056, 0.02931426279246807, -0.04403876140713692, 0.054817479103803635, 0.04242195561528206, 0.002497390378266573, 0.014801489189267159, -0.00832944642752409, 0.025022024288773537, 0.017871499061584473, 0.016254691407084465, 0.027023786678910255, -0.01075947005301714, 0.02046031691133976, -0.007939680479466915, -0.033298540860414505, 0.036243438720703125, 0.04442371800541878, -0.055510398000478745, -0.0017407148843631148, 0.048042286187410355, 0.06132320687174797, -0.04072815552353859, 0.0501980297267437, 0.01954605057835579, -0.001839359407313168, -0.011875837109982967, 0.021307215094566345, 0.037244319915771484, -0.023693932220339775, -0.047233883291482925, -0.014762993901968002, -0.008651846088469028, 0.03628193587064743, 0.0035824801307171583, 0.0074151805602014065, 0.037494540214538574, -0.01757315918803215, -0.1763090342283249, -0.011452388018369675, -0.04153655841946602, 0.01832382008433342, -0.04530911147594452, -0.04349982738494873, 0.003377973334863782, 0.05327766388654709, -0.04985157027840614, 0.0385916605591774, 0.032047439366579056, -0.03189345821738243, -0.022731546312570572, 0.001054413616657257, 0.005095831584185362, 0.02459857426583767, -0.0023361907806247473, -0.00982595607638359, 
0.02523374930024147, -0.030103418976068497, -0.01217417698353529, 0.010249406099319458, 0.05065997317433357, -0.057281188666820526, -0.02171141840517521, 0.016418296843767166, 0.017756013199687004, 0.0008643424953334033, -0.02271229960024357, -0.0023758893366903067, 0.01660115085542202, -0.0023289730306714773, -0.004265774041414261, 0.05785861983895302, 0.003989087883383036, 0.006106336135417223, -0.07829968631267548, -0.05662676319479942, -0.009537240490317345, 0.0172651968896389, 0.012251167558133602, 0.03772551193833351, -0.035107824951410294, 0.016119956970214844, 0.03075784258544445, -0.0015313959447667003, -0.03759077936410904, -0.029910942539572716, -0.03056536428630352, 0.058590032160282135, -0.042691420763731, -0.026311621069908142, -0.03451114520430565, -0.014195187017321587, 0.0031806842889636755, -0.01623544469475746, -0.023674683645367622, 0.013733241707086563, -0.022134866565465927, -0.03177797049283981, -0.04338433966040611, -0.035550523549318314, 0.036050960421562195, 0.05216129496693611, 0.0010483987862244248, -0.012607250362634659, 0.009590172208845615, 0.02090301364660263, 0.033163804560899734, -0.031546998769044876, 0.01568688452243805, -0.05054448917508125, -0.005836868193000555, 0.013579259626567364, -9.653929737396538e-05, 0.02348220720887184, -0.01648566499352455, -0.03545428439974785, 0.017890747636556625, -0.08992530405521393, 0.04119010269641876, 0.02837112545967102, 0.03820670768618584, -0.012944085523486137, -0.008127345703542233, -0.006274753715842962, -0.01107705757021904, 0.03535804525017738, 0.03220142051577568, 0.2115708291530609, -0.0038567599840462208, -0.049351129680871964, 0.014291425235569477, 0.00301948469132185, -0.015099829062819481, -0.010884580202400684, -0.048234764486551285, -0.005577024072408676, -0.01879538968205452, 0.005312368273735046, 0.012655369937419891, -0.03589697927236557, -0.0032841407228261232, -0.07252537459135056, 0.03381822630763054, -0.03081558458507061, -0.016678141430020332, 0.01954605057835579, -0.003962622489780188, -0.000775923312176019, 0.0050525241531431675, -0.018150590360164642, 0.016620397567749023, 0.018198709934949875, -0.014512773603200912, 0.0018574041314423084, 0.049543607980012894, 0.003890443593263626, 0.016707012429833412, 0.004381260368973017, -0.025329986587166786, 0.025445474311709404, -0.018420059233903885, -0.010528497397899628, 0.007723143789917231, -0.01651453599333763, -0.03811046853661537, 0.0014748558169230819, 0.0002642048930283636, -0.026850556954741478, -0.03691710904240608, -0.0013713993830606341, 0.01621619611978531, 0.02034483104944229, -0.0489661768078804, 0.04430823028087616, 0.002740392927080393, -0.06544221937656403, 0.007078345399349928, 0.0025262620765715837, -0.023501453921198845, 0.011317653581500053, -0.00256475736387074, -0.03262487053871155, -0.03709033876657486, 0.015590645372867584, -0.023020261898636818, -0.0194594357162714, 0.017534663900732994, 0.019536426290869713, -0.009349575266242027, -0.06070727854967117, 0.0313737690448761, -0.004128633998334408, 0.04192151501774788, -0.006972483359277248, -0.028698336333036423, -0.01433954481035471, -0.033548761159181595, 0.004989969078451395, 0.021133987233042717, 0.038630153983831406, -0.0005275075673125684, -0.0015374108916148543, 0.0007290070643648505, -0.016841746866703033, 0.022981766611337662, 0.03635892644524574, 0.06686654686927795, -0.013762113638222218, -0.040574174374341965, 0.003298576455563307, 0.006746322847902775, 0.017842628061771393, -0.015648389235138893, -0.003349101636558771, 0.0030748217832297087, 
-0.026715822517871857, -0.0008360724314115942, 0.023867161944508553, -0.03884188085794449, -0.0671360120177269, 0.03845692425966263, -0.012944085523486137, -0.10093499720096588, 0.026850556954741478, 0.028005419299006462, -0.02686980366706848, 0.012472516857087612, -0.0072178915143013, -0.0013377158902585506, -0.011712231673300266, 0.00436441833153367, 0.04715689271688461, -0.02434835396707058, -0.011952828615903854, 0.01937282085418701, -0.011154048144817352, -0.00981152057647705, 0.0010941120563074946, -0.0022195016499608755, -0.05681924149394035, 0.00227002683095634, 0.00857966672629118, -0.013425278477370739, 0.0024312264285981655, 0.004686817526817322, 0.07437315583229065, 0.03302907198667526, -0.03610870614647865, 0.006780005991458893, -0.015956351533532143, 0.052738726139068604, -0.04523212090134621, -0.03483835607767105, 0.01299220509827137, -0.03270186111330986, 0.00364744127728045, 0.04946661740541458, 0.03418393433094025, -0.03239389881491661, 0.018314197659492493, 0.004520806018263102, 0.031412262469530106, 0.03152775019407272, -0.0037268379237502813, 0.006924363784492016, -0.008161029778420925, -0.008805827237665653, -0.0011777193285524845, -0.03560826554894447, -0.0021064213942736387, 0.006067840848118067, 0.01292483787983656, 0.0010074973106384277, -0.009267772547900677, 0.042652927339076996, -0.007203455548733473, 0.023020261898636818, 0.0210184995085001, 0.06605814397335052, 0.007501795422285795, -0.042267974466085434, 0.017063096165657043, -0.01748654432594776, -0.00511507922783494, -0.030411383137106895, -0.010441883467137814, -0.04119010269641876, 0.010441883467137814, -0.06798291206359863, 0.03836068883538246, 0.06532672792673111, 0.019661536440253258, -0.039380814880132675, 0.013261673040688038, 0.06913777440786362, -0.03614719957113266, -0.02873683162033558, 0.027524227276444435, -0.030045676976442337, -0.056896232068538666, 0.005952354520559311, -0.013713994063436985, -0.02159593068063259, -0.046156011521816254, 0.004720501136034727, -0.004058861173689365, -0.003919315058737993, 0.020248591899871826, 0.027697455137968063, 0.036127954721450806, 0.015090204775333405, 0.0269852913916111, 0.008372753858566284, 0.033991456031799316, -0.02121097780764103, -0.029545236378908157, 0.008699965663254261, -0.017833003774285316, 0.09877925366163254, -0.007434428203850985, -0.022808536887168884, 0.05947542563080788, 0.03456888720393181, -0.03526180610060692, 0.07410368323326111, 0.018006233498454094, -0.023174243047833443, 0.012578379362821579, 0.015224939212203026, -0.0049081663601100445, -0.015484782867133617, 0.012722737155854702, -0.016860995441675186, -0.021422702819108963, -0.012905590236186981, -0.027947675436735153, -0.0008745678351260722, -0.06063028797507286, -0.03148925304412842, -0.024213619530200958, 0.017861874774098396, 0.026388611644506454, 0.006707827094942331, -0.01082683727145195, 0.054317038506269455, 0.006356556434184313, -0.0495821014046669, -0.021056994795799255, -0.009137850254774094, 0.08014746755361557, -0.00696767121553421, 0.00638542789965868, 0.014647508040070534, 0.006943611428141594, -0.017303692176938057, 0.02328973077237606, -0.005182445980608463, -0.0016384613700211048, -0.03337553143501282, -0.009619043208658695, 0.014252929948270321, 0.02504127100110054, -0.007496983278542757, -0.02806316316127777, 0.043846286833286285, -0.019007114693522453, 0.014916975982487202, -0.035492777824401855, -0.03131602704524994, 0.04249894618988037, 0.007877125404775143, 0.02303951047360897, 0.0313737690448761, -0.057204198092222214, -0.004843205213546753, 
0.015917856246232986, 0.05635729804635048, 0.040766652673482895, 0.02126871980726719, -0.0291795302182436, -0.02465631812810898, -0.00578874908387661, 0.03299057483673096, -0.004311487078666687, -0.013511893339455128, -0.047041404992341995, -0.038129713386297226, 0.031470008194446564, 0.01732293888926506, -0.024117382243275642, -0.00955167692154646, -0.022250354290008545, -0.03270186111330986, 0.016841746866703033, 0.035685256123542786, -0.007266010623425245, -0.08276515454053879, 0.034742116928100586, 0.012944085523486137, -0.04095912724733353, -0.01888200454413891, 0.03258637338876724, -0.003832700429484248, -0.04303788021206856, 0.04423123970627785, -0.010172415524721146, 0.017553912475705147, -0.014474278315901756, 0.06444133818149567, -0.06174665689468384, 0.04627149552106857, -0.010316773317754269, -0.028890814632177353, -0.031854961067438126, 0.0032793288119137287, 0.00982595607638359, 0.005389358848333359, 0.011057809926569462, -0.04303788021206856, 0.02271229960024357, 0.04761883616447449, -0.01817946322262287, -0.04068966209888458, 0.013627379201352596, 0.012029819190502167, -0.010105048306286335, 0.0420369990170002, -0.015783123672008514, -0.003539172699674964, -0.026253877207636833, -0.03037288784980774, 0.010085800662636757, 0.032316904515028, -0.020806774497032166, 0.0028197895735502243, 0.057897113263607025, 0.04592503607273102, 0.027832189574837685, -0.03000718168914318, -0.015571397729218006, 0.006958047393709421, 0.031412262469530106, -0.0005127710173837841, 0.015128700993955135, 0.011529378592967987, -0.011500507593154907, 0.006640460342168808, 0.03639741986989975, -0.004689223598688841, 0.006447982974350452, 0.057897113263607025, -0.043653808534145355, -0.016148829832673073, -0.047734323889017105, 0.048542726784944534, 0.006202574819326401, -0.00028029480017721653, -0.008161029778420925, 0.011452388018369675, -0.022115619853138924, -0.032663363963365555, -0.008570043370127678, 0.016668517142534256, 0.04473168030381203, 0.0062891896814107895, 0.0539705827832222, -0.053701113909482956, 0.03526180610060692, 0.04022771492600441, -0.025368481874465942, 0.010903827846050262, 0.025079766288399696, 0.0042152488604187965, 0.0032288033980876207, 0.01765977405011654, 0.013098067604005337, -0.005807996727526188, 0.011712231673300266, 0.06190063804388046, 0.005562588572502136, -0.004042019136250019, 0.008281327784061432, 0.019392069429159164, -0.038187459111213684, -0.0451936237514019, -0.018949370831251144, 0.02692754752933979, 0.006654895842075348, -0.026812061667442322, 0.020229343324899673, -0.001249898225069046, 0.01648566499352455, 0.018198709934949875, 0.015783123672008514, -0.0027740763034671545, 0.01568688452243805, -0.029776208102703094, -0.00034284984576515853, -0.009426566772162914, 0.018227582797408104, 0.00128237868193537, -0.0602453351020813, 0.0317009799182415, -0.012578379362821579, -0.056588269770145416, 0.013483021408319473, -0.01535004936158657, -0.002010182710364461, 0.0018971024546772242, -0.006698203273117542, 0.010191663168370724, 0.05527942627668381, -0.005860927980393171, -0.007987800054252148, -0.038264449685811996, 0.032162923365831375, 0.020691288635134697, -0.0373598076403141, -0.030411383137106895, -0.043846286833286285, 0.006558657623827457, -0.10008809715509415, 0.019334325566887856, -0.028082409873604774, -0.0033611315302550793, -0.040266212075948715, -0.028467364609241486, -0.009748965501785278, 0.020749032497406006, -0.0021064213942736387, 0.009508369490504265, 0.05474048852920532, -0.020498812198638916, 0.007328565698117018, 
-0.007169772405177355, 0.0039602164179086685, -0.0007470517884939909, -0.024810299277305603, -0.018718399107456207, 0.027909180149435997, 0.020056115463376045, -0.020036866888403893, -0.02779369428753853, -0.03314455971121788, -0.019382445141673088, -0.00571175804361701, 0.01854516938328743, -0.01067285519093275, 0.05855153501033783, -0.03393371403217316, 0.013059571385383606, -0.04122859612107277, -0.06451832503080368, 0.02253906987607479, 0.024001894518733025, -0.006082276813685894, 0.017842628061771393, 0.011548626236617565, -0.006760758347809315, -0.03880338370800018, -0.01550403144210577, -0.015321177430450916, 0.02717776782810688, -0.03081558458507061, -0.007453675847500563, 0.02071053721010685, 6.763014243915677e-05, -0.05177634209394455, -0.014599388465285301, -0.006260317750275135, 0.027158519253134727, 0.0030892575159668922, 0.014493525959551334, 0.0145897651091218, 0.010586241260170937, 0.030796337872743607, -0.016004471108317375, 0.017842628061771393, 0.008887630887329578, 0.019151471555233, 0.05662676319479942, -0.03252863138914108, 0.016427921131253242, -0.03914984315633774, -0.005350863561034203, -0.030834833160042763, -0.018583664670586586, -0.005841680336743593, -0.01528268214315176, -0.04323035851120949, 0.030642354860901833, 0.01082683727145195, -0.009113791398704052, 0.053624123334884644, -0.03635892644524574, 0.005211317911744118, 0.04681043326854706, -0.03845692425966263, 0.028024666011333466, -0.005418230779469013, -0.028332630172371864, -0.012664993293583393, 0.029776208102703094, 0.0010574210900813341, -0.03870714455842972, 0.045463092625141144, -0.03595472499728203, -0.004835987463593483, -0.004042019136250019, -0.0414210744202137, 0.011548626236617565, -0.023559197783470154, -0.01760203205049038, -0.0006141222547739744, -0.020556554198265076, 0.005168010480701923, 0.030276648700237274, -0.03880338370800018, 0.024964280426502228, 0.013608131557703018, -0.006524974014610052, -0.011943204328417778, -0.01590823382139206, 0.011596745811402798, 0.0013341068988665938, -0.017120838165283203, -0.034164685755968094, 0.014916975982487202, -0.004251338075846434, 0.056703757494688034, 0.00546153774484992, 0.03308681398630142, 0.025888171046972275, -0.008497864007949829, -0.023424463346600533, 0.023020261898636818, 0.014916975982487202, 0.002159352647140622, -0.03401070460677147, 0.008449745364487171, 0.01571575552225113, -0.008555607870221138, 0.018564416095614433, -0.005865739658474922, -0.014397287741303444, 0.006900304462760687, -0.0018297354690730572, 0.01525381114333868, 0.027755199000239372, -0.03000718168914318, 0.01114442478865385, 0.041652046144008636, -0.015080581419169903, 0.0037942049093544483, 0.01525381114333868, 0.02146119810640812, -0.01277085579931736, 0.041652046144008636, 0.025772685185074806, 0.02942975051701069, -0.027774445712566376, -0.06074577569961548, -0.025252996012568474, -0.018612535670399666, -0.01503246184438467, 0.011981699615716934, 0.027832189574837685, -0.01182771846652031, -0.01283822301775217, -0.11279158294200897, -0.044654689729213715, 0.024771803990006447, -0.040073733776807785, 0.01848742552101612, -0.002214689739048481, -0.002634530421346426, 0.04292239621281624, 0.009344763122498989, -0.01676475629210472, 0.02059505134820938, 0.028505859896540642, 0.01258800271898508, 0.026330867782235146, 0.02509901486337185, -0.04095912724733353, -0.02912178635597229, 0.002711521228775382, 0.010432259179651737, -0.019805895164608955, 0.03581998869776726, -0.007338189519941807, -0.021191729232668877, 0.048542726784944534, 0.004568925127387047, 
-0.031662482768297195, -0.053200673311948776, -0.0420369990170002, 0.007766451220959425, 0.008675905875861645, 0.02429061010479927, 0.027581969276070595, -0.0017491356702521443, 0.024868043139576912, 0.03814896196126938, -0.015677260234951973, -0.012790103442966938, 0.013232801109552383, 0.028294134885072708, 0.007400744594633579, 0.016947608441114426, 0.015811994671821594, -0.01698610559105873, 0.0015590646071359515, -0.010355268605053425, -0.027504978701472282, 0.03383747488260269, 0.025965161621570587, -0.011038562282919884, 0.0872306227684021, -0.037186577916145325, 0.005918670911341906, 0.02812090516090393, -0.0023422057274729013, -0.007121652830392122, -0.0014953064965084195, -0.009118602611124516, 0.019440187141299248, 0.026407858356833458, 0.001731090946123004, 0.01182771846652031, 0.031085053458809853, 0.026658078655600548, 0.00528830848634243, -0.015552150085568428, 0.00214130780659616, -0.0031445948407053947, -0.023001015186309814, -0.016091085970401764, -0.035685256123542786, -0.023328226059675217, 0.0061881388537585735, 0.011692984029650688, -0.009046424180269241, 0.0010742628946900368, -0.006943611428141594, -0.003310606349259615, -0.014705250971019268, -0.034549642354249954, -0.005793560761958361, -0.009845203720033169, -0.012164553627371788, -0.03747529163956642, 0.01665889285504818, -0.017756013199687004, 0.05139138922095299, -5.984834933769889e-05, -0.020133106037974358, 0.04985157027840614, 0.010951947420835495, -0.03501158580183983, -0.018939746543765068, 0.05327766388654709, 0.00946506205946207, -0.0194594357162714, 0.03761002793908119], index=0, object='embedding')], model='text-embedding-v3', object='list', usage=Usage(prompt_tokens=4, total_tokens=4), id='8ef7c577-75e9-9056-ba09-dc4331f81f6d', meta={'usage': {'credits_used': 1}}) ``` {% endcode %}
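The endpoint also accepts an array of input strings and an optional `dimensions` parameter (64–2048, default 1024, per the schema above). A minimal sketch, assuming the same `<YOUR_AIMLAPI_KEY>` placeholder, that embeds two phrases in one call and compares them with cosine similarity:

```python
import math

import openai

client = openai.OpenAI(
    # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
    api_key="<YOUR_AIMLAPI_KEY>",
    base_url="https://api.aimlapi.com/v1",
)

# Embed two phrases in a single request; `dimensions` trims the returned vectors
response = client.embeddings.create(
    model="alibaba/qwen-text-embedding-v4",
    input=["Laura is a DJ.", "Laura mixes music at a club."],
    dimensions=256,
)

a = response.data[0].embedding
b = response.data[1].embedding

# Cosine similarity: values closer to 1 mean the phrases are semantically closer
dot = sum(x * y for x, y in zip(a, b))
norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
print(f"cosine similarity: {dot / norm:.3f}")
```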
You can find a more advanced example of using embedding vectors in our article [Find Relevant Answers: Semantic Search with Text Embeddings](https://docs.aimlapi.com/use-cases/find-relevant-answers-semantic-search-with-text-embeddings) in the Use Cases section. --- # Source: https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen-turbo.md # qwen-turbo {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `qwen-turbo` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview This model is designed to enhance both the performance and efficiency of AI agents developed on the Alibaba Cloud Model Studio platform. Optimized for speed and precision in generative AI application development. Improves AI agent comprehension and adaptation to enterprise data, especially when integrated with Retrieval-Augmented Generation (RAG) architectures.\ Large context window (1,000,000 tokens). {% hint style="success" %} [Create AI/ML API Key](https://aimlapi.com/app/keys) {% endhint %}
## How to make the first API call

**1️⃣ Required setup (don’t skip this)**

▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\
▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI.

**2️⃣ Copy the code example**

At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project.

**3️⃣ Update the snippet for your use case**

▪ **Insert your API key:** replace `<YOUR_AIMLAPI_KEY>` with your real AI/ML API key.\
▪ **Select a model:** set the `model` field to the model you want to call.\
▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models).

**4️⃣ (Optional) Tune the request**

Depending on the model type, you can add optional parameters to control the output (e.g., generation settings, quality, length). See the API schema below for the full list.

**5️⃣ Run your code**

Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/qwen-turbo"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. 
required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. 
logprobs must be set to True if this parameter is used."}},"required":["model","messages"],"title":"alibaba/qwen-turbo"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"qwen-turbo", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'qwen-turbo', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response

{% code overflow="wrap" %}
```json
{
  "id": "chatcmpl-a4556a4c-f985-9ef2-b976-551ac7cef85a",
  "system_fingerprint": null,
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today? Is there something you would like to talk about or learn more about? I'm here to help with any questions you might have."
      }
    }
  ],
  "created": 1744144035,
  "model": "qwen-turbo",
  "usage": {
    "prompt_tokens": 1,
    "completion_tokens": 15,
    "total_tokens": 16,
    "prompt_tokens_details": {
      "cached_tokens": 0
    }
  }
}
```
{% endcode %}
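The request schema above also documents a streaming mode: with `"stream": true` the endpoint returns the completion incrementally as `chat.completion.chunk` objects over server-sent events, and `stream_options.include_usage` adds a final usage chunk. Below is a minimal sketch of consuming that stream with `requests`, assuming the OpenAI-style `data: ...` / `data: [DONE]` framing; the key placeholder is left empty, just like in the example above.

{% code overflow="wrap" %}
```python
import json
import requests


def stream_chat():
    response = requests.post(
        "https://api.aimlapi.com/v1/chat/completions",
        headers={
            # Insert your AIML API Key instead of :
            "Authorization": "Bearer ",
            "Content-Type": "application/json",
        },
        json={
            "model": "qwen-turbo",
            "messages": [{"role": "user", "content": "Hello"}],
            # Stream the answer as server-sent events
            "stream": True,
            # Request a final usage chunk (see stream_options in the schema)
            "stream_options": {"include_usage": True},
        },
        stream=True,
    )
    response.raise_for_status()

    # Each SSE line looks like: data: {...chat.completion.chunk JSON...}
    for line in response.iter_lines():
        if not line:
            continue
        payload = line.decode("utf-8").removeprefix("data: ")
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        # The final usage-only chunk may have no choices
        if chunk.get("choices"):
            delta = chunk["choices"][0].get("delta") or {}
            print(delta.get("content") or "", end="", flush=True)


if __name__ == "__main__":
    stream_chat()
```
{% endcode %}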
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen2.5-72b-instruct-turbo.md # Qwen2.5-72B-Instruct-Turbo {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `Qwen/Qwen2.5-72B-Instruct-Turbo` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A state-of-the-art large language model designed for a variety of natural language processing tasks, including instruction following, coding assistance, and mathematical problem-solving. {% hint style="success" %} [Create AI/ML API Key](https://aimlapi.com/app/keys) {% endhint %}
## How to make the first API call

1️⃣ **Required setup (don’t skip this)**\
▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\
▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI.

2️⃣ **Copy the code example**\
At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project.

3️⃣ **Update the snippet for your use case**\
▪ **Insert your API key:** replace the empty value after `Bearer` in the `Authorization` header with your real AI/ML API key (a sketch of reading the key from an environment variable follows right after these steps).\
▪ **Select a model:** set the `model` field to the model you want to call.\
▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models).

4️⃣ **(Optional) Tune the request**\
Depending on the model type, you can add optional parameters to control the output (e.g., generation settings, quality, length, etc.). See the API schema below for the full list.

5️⃣ **Run your code**\
Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
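Hard-coding the key is fine for a quick test, but a common variation on step 3️⃣ is to read it from an environment variable so it never ends up in version control. A minimal sketch follows; the variable name `AIML_API_KEY` is only an example, not something the API requires.

{% code overflow="wrap" %}
```python
import os
import requests

# Example environment variable name; set it in your shell first, e.g.:
#   export AIML_API_KEY="..."
api_key = os.environ["AIML_API_KEY"]

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
    json={
        "model": "Qwen/Qwen2.5-72B-Instruct-Turbo",
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}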
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["Qwen/Qwen2.5-72B-Instruct-Turbo"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"echo":{"type":"boolean","description":"If True, the response will contain the prompt. Can be used with logprobs to return prompt logprobs."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. 
The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. 
Higher values decrease repetition."}},"required":["model","messages"],"title":"Qwen/Qwen2.5-72B-Instruct-Turbo"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"Qwen/Qwen2.5-72B-Instruct-Turbo", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'Qwen/Qwen2.5-72B-Instruct-Turbo', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response

{% code overflow="wrap" %}
```json
{
  "id": "npK4dJH-4yUbBN-92d488799a225ec1",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything specific.",
        "tool_calls": []
      }
    }
  ],
  "created": 1744144336,
  "model": "Qwen/Qwen2.5-72B-Instruct-Turbo",
  "usage": {
    "prompt_tokens": 76,
    "completion_tokens": 73,
    "total_tokens": 149
  }
}
```
{% endcode %}
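The request schema above also accepts a `tools` array and a `tool_choice` setting, so the model can return structured function calls instead of plain text. Below is a hedged sketch of a single function-calling request: the `get_weather` function and its parameters are invented for illustration, and whether the model actually emits a `tool_calls` entry depends on the prompt.

{% code overflow="wrap" %}
```python
import json
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "Qwen/Qwen2.5-72B-Instruct-Turbo",
        "messages": [
            {"role": "user", "content": "What's the weather like in Paris today?"}
        ],
        # Hypothetical tool definition, following the `tools` format in the schema
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "description": "Get the current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {
                            "city": {"type": "string", "description": "City name"}
                        },
                        "required": ["city"],
                    },
                },
            }
        ],
        "tool_choice": "auto",
    },
)
response.raise_for_status()

message = response.json()["choices"][0]["message"]
# If the model decided to call the tool, its arguments arrive as a JSON string
for call in message.get("tool_calls") or []:
    args = json.loads(call["function"]["arguments"])
    print(call["function"]["name"], args)
```
{% endcode %}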
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen2.5-7b-instruct-turbo.md # Qwen2.5-7B-Instruct-Turbo {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `Qwen/Qwen2.5-7B-Instruct-Turbo` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A cutting-edge large language model designed to understand and generate text based on specific instructions. It excels in various tasks, including coding, mathematical problem-solving, and generating structured outputs. {% hint style="success" %} [Create AI/ML API Key](https://aimlapi.com/app/keys) {% endhint %}
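Since the overview highlights structured outputs, here is a hedged sketch of requesting a JSON reply through the `response_format` parameter described in the API schema further down this page. The schema name and fields (`city_facts`, `city`, `population`) are invented for illustration, and strict schema adherence may vary by model, so the prompt still asks for JSON explicitly.

{% code overflow="wrap" %}
```python
import json
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "Qwen/Qwen2.5-7B-Instruct-Turbo",
        "messages": [
            {
                "role": "user",
                "content": "Reply in JSON with the city of Paris and its approximate population.",
            }
        ],
        # Illustrative JSON Schema; see response_format in the API schema below
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": "city_facts",
                "schema": {
                    "type": "object",
                    "properties": {
                        "city": {"type": "string"},
                        "population": {"type": "integer"},
                    },
                    "required": ["city", "population"],
                },
            },
        },
    },
)
response.raise_for_status()

# The structured reply still arrives as a string in message.content;
# parsing assumes the model honored the requested format
content = response.json()["choices"][0]["message"]["content"]
print(json.loads(content))
```
{% endcode %}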
## How to make the first API call

1️⃣ **Required setup (don’t skip this)**\
▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\
▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI.

2️⃣ **Copy the code example**\
At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project.

3️⃣ **Update the snippet for your use case**\
▪ **Insert your API key:** replace the empty value after `Bearer` in the `Authorization` header with your real AI/ML API key.\
▪ **Select a model:** set the `model` field to the model you want to call.\
▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models).

4️⃣ **(Optional) Tune the request**\
Depending on the model type, you can add optional parameters to control the output (e.g., generation settings, quality, length, etc.). See the API schema below for the full list.

5️⃣ **Run your code**\
Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["Qwen/Qwen2.5-7B-Instruct-Turbo"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"echo":{"type":"boolean","description":"If True, the response will contain the prompt. Can be used with logprobs to return prompt logprobs."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. 
The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. 
Higher values decrease repetition."}},"required":["model","messages"],"title":"Qwen/Qwen2.5-7B-Instruct-Turbo"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"Qwen/Qwen2.5-7B-Instruct-Turbo", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'Qwen/Qwen2.5-7B-Instruct-Turbo', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
**Response**

{% code overflow="wrap" %}
```json
{"id": "npK4C7y-3NKUce-92d4866b1e62ef98", "object": "chat.completion", "choices": [{"index": 0, "finish_reason": "stop", "logprobs": null, "message": {"role": "assistant", "content": "Hello! How can I assist you today?", "tool_calls": []}}], "created": 1744144252, "model": "Qwen/Qwen2.5-7B-Instruct-Turbo", "usage": {"prompt_tokens": 19, "completion_tokens": 6, "total_tokens": 25}}
```
{% endcode %}
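The generated text is nested inside the `choices` array of the response. Continuing the Python example above (where `data` holds the parsed response), here is a minimal sketch of extracting the reply and the token usage:

{% code overflow="wrap" %}
```python
# `data` is the parsed JSON response from the Python example above.
reply = data["choices"][0]["message"]["content"]
print(reply)  # e.g. "Hello! How can I assist you today?"

# Token accounting (useful for cost monitoring) lives in the usage object.
usage = data["usage"]
print(usage["prompt_tokens"], "prompt +", usage["completion_tokens"],
      "completion =", usage["total_tokens"], "total tokens")
```
{% endcode %}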
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen2.5-coder-32b-instruct.md # Qwen2.5-Coder-32B-Instruct {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `Qwen/Qwen2.5-Coder-32B-Instruct` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview The 32B variant of the latest code-focused model series (formerly CodeQwen). The most capable, with strong performance in coding, math, and general tasks. {% hint style="success" %} [Create AI/ML API Key](https://aimlapi.com/app/keys) {% endhint %}
## How to make the first API call

**1️⃣ Required setup (don’t skip this)**\
▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\
▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI.

**2️⃣ Copy the code example**\
At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project.

**3️⃣ Update the snippet for your use case**\
▪ **Insert your API key:** put your real AI/ML API key in place of the placeholder in the `Authorization` header.\
▪ **Select a model:** set the `model` field to the model you want to call.\
▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models).

**4️⃣ (Optional) Tune the request**\
Depending on the model type, you can add optional parameters to control the output (generation settings, quality, length, and so on). See the API schema below for the full list, and the short sketch right after these steps.

**5️⃣ Run your code**\
Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
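Step 4 in practice: below is a minimal sketch of the same request as in the code example further down, with a few optional generation parameters from the API schema added (`max_tokens`, `temperature`, `top_p`). The prompt and the specific values are illustrative only, not recommendations.

{% code overflow="wrap" %}
```python
import requests

# Same endpoint and headers as the basic code example below, plus a few
# optional generation parameters from the API schema. Values are illustrative.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "Qwen/Qwen2.5-Coder-32B-Instruct",
        "messages": [
            {"role": "user", "content": "Write a Python function that reverses a string."}
        ],
        "max_tokens": 512,    # cap the length of the generated completion
        "temperature": 0.2,   # lower values give more focused, deterministic output
        "top_p": 0.9,         # nucleus sampling; tune this or temperature, not both
    },
)
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}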
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["Qwen/Qwen2.5-Coder-32B-Instruct"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"echo":{"type":"boolean","description":"If True, the response will contain the prompt. Can be used with logprobs to return prompt logprobs."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. 
The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. 
Higher values decrease repetition."}},"required":["model","messages"],"title":"Qwen/Qwen2.5-Coder-32B-Instruct"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","const":"chat.completion","description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":["string","null"],"description":"The refusal message generated by the model."},"annotations":{"anyOf":[{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","const":"url_citation","description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"additionalProperties":false,"description":"A URL citation when using web search."}},"required":["type","url_citation"],"additionalProperties":false}},{"type":"null"}],"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"anyOf":[{"type":"object","properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"additionalProperties":false},{"type":"null"}],"description":"A chat completion message generated by the model."},"tool_calls":{"anyOf":[{"type":"array","items":{"anyOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","const":"function","description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"additionalProperties":false,"description":"The function that the model called."}},"required":["id","type","function"],"additionalProperties":false},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","const":"custom","description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"additionalProperties":false,"description":"The custom tool that the model called."}},"required":["id","type","custom"],"additionalProperties":false}]}},{"type":"null"}],"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"additionalProperties":false,"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"anyOf":[{"type":"object","properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"anyOf":[{"type":"array","items":{"type":"object","properties":{"bytes":{"anyOf":[{"type":"array","items":{"type":"integer"}},{"type":"null"}],"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"],"additionalProperties":false}},{"type":"null"}],"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"],"additionalProperties":false},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"$ref":"#/properties/choices/items/properties/logprobs/anyOf/0/properties/content/items"},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"additionalProperties":false},{"type":"null"}],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"],"additionalProperties":false}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"anyOf":[{"type":"object","properties":{"accepted_prediction_tokens":{"anyOf":[{"type":"integer"},{"type":"null"}],"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"anyOf":[{"type":"integer"},{"type":"null"}],"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"anyOf":[{"type":"integer"},{"type":"null"}],"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"anyOf":[{"type":"integer"},{"type":"null"}],"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"additionalProperties":false},{"type":"null"}],"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"anyOf":[{"type":"object","properties":{"audio_tokens":{"anyOf":[{"type":"integer"},{"type":"null"}],"description":"Audio input tokens present in the prompt."},"cached_tokens":{"anyOf":[{"type":"integer"},{"type":"null"}],"description":"Cached tokens present in the prompt."}},"additionalProperties":false},{"type":"null"}],"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"additionalProperties":false,"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"anyOf":[{"type":"object","properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":["string","null"],"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"anyOf":[{"type":"array","items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"additionalProperties":false,"description":"The function that the model called."},"type":{"type":"string","const":"function","description":"The type of the tool."}},"required":["index","id","function","type"],"additionalProperties":false}},{"type":"null"}],"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"additionalProperties":false},{"type":"null"}],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"anyOf":[{"type":"object","properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. 
Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"anyOf":[{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"],"additionalProperties":false}},{"type":"null"}],"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"],"additionalProperties":false}},"refusal":{"type":"array","items":{"$ref":"#/properties/choices/items/properties/logprobs/anyOf/0/properties/content/items"}}},"required":["content","refusal"],"additionalProperties":false},{"type":"null"}],"description":"Log probability information for the choice."}},"required":["finish_reason","index"],"additionalProperties":false},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","const":"chat.completion.chunk","description":"The object type."},"service_tier":{"anyOf":[{"type":"string","enum":["auto","default","flex","scale","priority"]},{"type":"null"}],"description":"Specifies the processing type used for serving the request."},"usage":{"anyOf":[{"anyOf":[{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"anyOf":[{"type":"object","properties":{"accepted_prediction_tokens":{"anyOf":[{"type":"integer"},{"type":"null"}],"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"anyOf":[{"type":"integer"},{"type":"null"}],"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"anyOf":[{"type":"integer"},{"type":"null"}],"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"anyOf":[{"type":"integer"},{"type":"null"}],"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"additionalProperties":false},{"type":"null"}],"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"anyOf":[{"type":"object","properties":{"audio_tokens":{"anyOf":[{"type":"integer"},{"type":"null"}],"description":"Audio input tokens present in the prompt."},"cached_tokens":{"anyOf":[{"type":"integer"},{"type":"null"}],"description":"Cached tokens present in the prompt."}},"additionalProperties":false},{"type":"null"}],"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"additionalProperties":false},{"type":"null"}]},{"type":"null"}],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"],"additionalProperties":false,"$schema":"http://json-schema.org/draft-07/schema#"}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"Qwen/Qwen2.5-Coder-32B-Instruct", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'Qwen/Qwen2.5-Coder-32B-Instruct', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
**Response**

{% code overflow="wrap" %}
```json
{"id": "npK8TA2-4yUbBN-92d49ab20aeacfa2", "object": "chat.completion", "choices": [{"index": 0, "finish_reason": "stop", "logprobs": null, "message": {"role": "assistant", "content": "Hello! How can I assist you today?", "tool_calls": []}}], "created": 1744145083, "model": "Qwen/Qwen2.5-Coder-32B-Instruct", "usage": {"prompt_tokens": 50, "completion_tokens": 17, "total_tokens": 67}}
```
{% endcode %}
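The API schema above also accepts `stream: true`, in which case the response arrives as server-sent events (`text/event-stream`) whose chunks carry a `delta` with the next piece of the message. Below is a minimal consumption sketch; it assumes OpenAI-style `data: ...` framing terminated by `data: [DONE]`, which may differ in detail from the actual stream.

{% code overflow="wrap" %}
```python
import requests
import json

# Streaming sketch: assumes OpenAI-style SSE framing ("data: {json}" lines,
# terminated by "data: [DONE]"). Adjust the parsing if the framing differs.
with requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "Qwen/Qwen2.5-Coder-32B-Instruct",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,
    },
    stream=True,
) as response:
    for line in response.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data: "):
            continue
        payload = line[len("data: "):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        if not chunk.get("choices"):
            continue  # e.g. a final usage-only chunk
        delta = chunk["choices"][0].get("delta") or {}
        print(delta.get("content", ""), end="", flush=True)
    print()
```
{% endcode %}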
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-235b-a22b-thinking-2507.md # qwen3-235b-a22b-thinking-2507 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/qwen3-235b-a22b-thinking-2507` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview Significantly improved performance on reasoning tasks, including logical reasoning, mathematics, science, coding, and academic benchmarks that typically require human expertise. {% hint style="success" %} [Create AI/ML API Key](https://aimlapi.com/app/keys) {% endhint %}
## How to make the first API call

**1️⃣ Required setup (don’t skip this)**\
▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\
▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI.

**2️⃣ Copy the code example**\
At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project.

**3️⃣ Update the snippet for your use case**\
▪ **Insert your API key:** put your real AI/ML API key in place of the placeholder in the `Authorization` header.\
▪ **Select a model:** set the `model` field to the model you want to call.\
▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models).

**4️⃣ (Optional) Tune the request**\
Depending on the model type, you can add optional parameters to control the output (generation settings, quality, length, and so on). See the API schema below for the full list, and the short sketch right after these steps.

**5️⃣ Run your code**\
Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
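Step 4 in practice: a minimal sketch of the basic request with two optional parameters from the API schema added. `max_completion_tokens` bounds the total number of generated tokens, including the model's reasoning tokens; the prompt and the values shown are illustrative only.

{% code overflow="wrap" %}
```python
import requests

# Basic request plus optional parameters from the API schema.
# Values are illustrative, not recommendations.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "alibaba/qwen3-235b-a22b-thinking-2507",
        "messages": [
            {"role": "user", "content": "Prove that the sum of two even integers is even."}
        ],
        # Upper bound on generated tokens, including reasoning tokens:
        "max_completion_tokens": 2048,
        "temperature": 0.6,  # lower values give more focused output
    },
)
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}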
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/qwen3-235b-a22b-thinking-2507"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. 
Higher values decrease repetition."}},"required":["model","messages"],"title":"alibaba/qwen3-235b-a22b-thinking-2507"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"alibaba/qwen3-235b-a22b-thinking-2507", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], "enable_thinking": False } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'alibaba/qwen3-235b-a22b-thinking-2507', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
**Response**:

{% code overflow="wrap" %}
```json5
{
  "id": "chatcmpl-af05df1d-5b72-925e-b3a9-437acbd89b1a",
  "system_fingerprint": null,
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hello! 😊 How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything specific!",
        "reasoning_content": "Okay, the user said \"Hello\". That's a simple greeting. I should respond in a friendly and welcoming way. Let me make sure to keep it open-ended so they feel comfortable to ask questions or share what's on their mind. Maybe add a smiley emoji to keep it warm. Let me check if there's anything else they might need. Since it's just a hello, probably not much more needed here. Just a polite reply."
      }
    }
  ],
  "created": 1753871154,
  "model": "qwen3-235b-a22b-thinking-2507",
  "usage": {
    "prompt_tokens": 13,
    "completion_tokens": 2187,
    "total_tokens": 2200
  }
}
```
{% endcode %}
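To pull just the reply (and, for this thinking model, the reasoning trace) out of a response like the one above, a small parsing sketch in Python could look like this; the field names follow the sample response shown above.

{% code overflow="wrap" %}
```python
# Assuming `data` is the parsed JSON response from the Code Example above:
message = data["choices"][0]["message"]

print(message["content"])                    # the final answer
print(message.get("reasoning_content", ""))  # the reasoning trace, if present

usage = data["usage"]
print(usage["total_tokens"])                 # prompt_tokens + completion_tokens
```
{% endcode %}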
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-235b-a22b.md

# Qwen3-235B-A22B

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following model:

* `Qwen/Qwen3-235B-A22B-fp8-tput`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}

## Model Overview

A hybrid instruct-and-reasoning text model.

{% hint style="success" %}
[Create AI/ML API Key](https://aimlapi.com/app/keys)
{% endhint %}
## How to make the first API call

1️⃣ **Required setup (don’t skip this)**\
▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\
▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI.

2️⃣ **Copy the code example**\
At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project.

3️⃣ **Update the snippet for your use case**\
▪ **Insert your API key:** put your real AI/ML API key after `Bearer` in the `Authorization` header.\
▪ **Select a model:** set the `model` field to the model you want to call.\
▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models).

4️⃣ **(Optional) Tune the request**\
Depending on the model type, you can add optional parameters to control the output (e.g., generation settings, quality, length). See the API schema below for the full list, and the short sketch after these steps for an example.

5️⃣ **Run your code**\
Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
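As an illustration of steps 3 and 4 for this model, here is a minimal sketch (assuming Python and `requests`, as in the Code Example below) that adds an optional system message and a `max_tokens` cap; the system prompt text and the token limit are illustrative placeholders.

{% code overflow="wrap" %}
```python
import requests

# Minimal sketch: a system message to steer behavior plus an optional max_tokens cap.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key after "Bearer":
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "Qwen/Qwen3-235B-A22B-fp8-tput",
        "messages": [
            {"role": "system", "content": "You are a concise assistant."},  # illustrative system prompt
            {"role": "user", "content": "Hello"},
        ],
        "max_tokens": 256,  # illustrative cap on the completion length
    },
)
print(response.json())
```
{% endcode %}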
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["Qwen/Qwen3-235B-A22B-fp8-tput"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the developer message."},"role":{"type":"string","enum":["developer"],"description":"The role of the author of the message — in this case, the developer."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["content","role"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. 
Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. 
Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"n":{"type":"integer","nullable":true,"minimum":1,"description":"How many chat completion choices to generate for each input message. Note that you will be charged based on the number of generated tokens across all of the choices. Keep n as 1 to minimize costs."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. 
The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"min_p":{"type":"number","minimum":0.001,"maximum":0.999,"description":"A number between 0.001 and 0.999 that can be used as an alternative to top_p and top_k."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. 
Higher values decrease repetition."}},"required":["model","messages"],"title":"Qwen/Qwen3-235B-A22B-fp8-tput"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"Qwen/Qwen3-235B-A22B-fp8-tput", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'Qwen/Qwen3-235B-A22B-fp8-tput', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {"id": "ntFB5Ap-6UHjtw-93cab7642d14efac", "object": "chat.completion", "choices": [{"index": 0, "finish_reason": "stop", "logprobs": null, "message": {"role": "assistant", "content": "\nOkay, the user just said \"Hello\". I should respond in a friendly and welcoming manner. Let me make sure to greet them back and offer assistance. Maybe say something like, \"Hello! How can I help you today?\" That should be open-ended and inviting for them to ask questions or share what's on their mind. Keep it simple and positive.\n\n\nHello! How can I help you today? 😊", "tool_calls": []}}], "created": 1746725755, "model": "Qwen/Qwen3-235B-A22B-fp8-tput", "usage": {"prompt_tokens": 4, "completion_tokens": 111, "total_tokens": 115}} ``` {% endcode %}
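If you only need the assistant's reply rather than the whole response object, you can read it from `choices[0].message.content` (the field names follow the response schema above). A minimal sketch that reuses the `data` dictionary from the Python example:

{% code overflow="wrap" %}
```python
# `data` is the JSON-decoded body returned by /v1/chat/completions (see the example above).
reply = data["choices"][0]["message"]["content"]
usage = data.get("usage", {})

print(reply)
print(f"Total tokens used: {usage.get('total_tokens')}")
```
{% endcode %}

Note that for reasoning models such as Qwen3, `content` may include the model's reasoning before the final answer, as in the captured response above.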
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-32b.md # qwen3-32b {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/qwen3-32b` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A world-class model with comparable quality to DeepSeek R1 while outperforming [GPT-4.1](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4.1) and [Claude 3.7 Sonnet](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-3.7-sonnet). Optimized for both complex reasoning and efficient dialogue. {% hint style="success" %} [Create AI/ML API Key](https://aimlapi.com/app/keys) {% endhint %}
How to make the first API call 1️⃣ **Required setup (don’t skip this)**\ ▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\ ▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI. 2️⃣ **Copy the code example**\ At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project. 3️⃣ **Update the snippet for your use case**\ ▪ **Insert your API key:** replace `` with your real AI/ML API key.\ ▪ **Select a model:** set the `model` field to the model you want to call.\ ▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models). 4️⃣ **(Optional) Tune the request**\ Depending on the model type, you can add optional parameters to control the output (e.g., generation settings, quality, length, etc.). See the API schema below for the full list. 5️⃣ **Run your code**\ Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/qwen3-32b"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. logprobs must be set to True if this parameter is used."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. 
Higher values decrease repetition."},"enable_thinking":{"type":"boolean","default":false,"description":"Specifies whether to use the thinking mode."},"thinking_budget":{"type":"integer","minimum":1,"description":"The maximum reasoning length, effective only when enable_thinking is set to true."}},"required":["model","messages"],"title":"alibaba/qwen3-32b"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example #1: Without Thinking and Streaming {% hint style="warning" %} `enable_thinking` must be set to `false` for non-streaming calls. 
{% endhint %} {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"alibaba/qwen3-32b", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], "enable_thinking": False } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'alibaba/qwen3-32b', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "chatcmpl-1d8a5aa6-34ce-9832-a296-d312b944b437", "system_fingerprint": null, "object": "chat.completion", "choices": [ { "index": 0, "finish_reason": "stop", "logprobs": null, "message": { "role": "assistant", "content": "Hello! How can I assist you today? 😊", "reasoning_content": "" } } ], "created": 1756990273, "model": "qwen3-32b", "usage": { "prompt_tokens": 19, "completion_tokens": 65, "total_tokens": 84 } } ``` {% endcode %}
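As shown above, this model's message object also carries a `reasoning_content` field, which stays empty when thinking is disabled. A minimal sketch for reading both fields from the parsed non-streaming response (it assumes the `data` dictionary from the Python tab above):

{% code overflow="wrap" %}
```python
# Read the final answer and the (possibly empty) reasoning from the response.
message = data["choices"][0]["message"]

answer = message.get("content", "")
reasoning = message.get("reasoning_content", "")  # empty when enable_thinking is False

print("Answer:", answer)
if reasoning:
    print("Reasoning:", reasoning)
```
{% endcode %}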
## Code Example #2: Enable Thinking and Streaming {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"alibaba/qwen3-32b", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], "enable_thinking": True, "stream": True } ) print(response.text) ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 data: {"id":"chatcmpl-81964e30-1a7c-9668-b78c-a750587ec497","choices":[{"delta":{"content":null,"role":"assistant","refusal":null,"reasoning_content":""},"index":0,"finish_reason":null}],"created":1753944369,"model":"qwen3-32b","object":"chat.completion.chunk","usage":null} data: {"id":"chatcmpl-81964e30-1a7c-9668-b78c-a750587ec497","choices":[{"delta":{"content":null,"refusal":null,"reasoning_content":"Okay"},"index":0,"finish_reason":null}],"created":1753944369,"model":"qwen3-32b","object":"chat.completion.chunk","usage":null} data: {"id":"chatcmpl-81964e30-1a7c-9668-b78c-a750587ec497","choices":[{"delta":{"content":null,"refusal":null,"reasoning_content":","},"index":0,"finish_reason":null}],"created":1753944369,"model":"qwen3-32b","object":"chat.completion.chunk","usage":null} data: {"id":"chatcmpl-81964e30-1a7c-9668-b78c-a750587ec497","choices":[{"delta":{"content":null,"refusal":null,"reasoning_content":" the"},"index":0,"finish_reason":null}],"created":1753944369,"model":"qwen3-32b","object":"chat.completion.chunk","usage":null} data: {"id":"chatcmpl-81964e30-1a7c-9668-b78c-a750587ec497","choices":[{"delta":{"content":null,"refusal":null,"reasoning_content":" user said \"Hello\". I should respond in a friendly and welcoming manner. Let"},"index":0,"finish_reason":null}],"created":1753944369,"model":"qwen3-32b","object":"chat.completion.chunk","usage":null} data: {"id":"chatcmpl-81964e30-1a7c-9668-b78c-a750587ec497","choices":[{"delta":{"content":null,"refusal":null,"reasoning_content":" me make sure to acknowledge their greeting and offer assistance. Maybe something like, \""},"index":0,"finish_reason":null}],"created":1753944369,"model":"qwen3-32b","object":"chat.completion.chunk","usage":null} data: {"id":"chatcmpl-81964e30-1a7c-9668-b78c-a750587ec497","choices":[{"delta":{"content":null,"refusal":null,"reasoning_content":"Hello! How can I assist you today?\" That's simple and open-ended."},"index":0,"finish_reason":null}],"created":1753944369,"model":"qwen3-32b","object":"chat.completion.chunk","usage":null} data: {"id":"chatcmpl-81964e30-1a7c-9668-b78c-a750587ec497","choices":[{"delta":{"content":null,"refusal":null,"reasoning_content":" I need to check if there's any specific context I should consider, but since"},"index":0,"finish_reason":null}],"created":1753944369,"model":"qwen3-32b","object":"chat.completion.chunk","usage":null} data: {"id":"chatcmpl-81964e30-1a7c-9668-b78c-a750587ec497","choices":[{"delta":{"content":null,"refusal":null,"reasoning_content":" there's none, a general response is fine. Alright, that should work."},"index":0,"finish_reason":null}],"created":1753944369,"model":"qwen3-32b","object":"chat.completion.chunk","usage":null} data: {"id":"chatcmpl-81964e30-1a7c-9668-b78c-a750587ec497","choices":[{"delta":{"content":"Hello! 
How can I assist you today?","refusal":null,"reasoning_content":null},"index":0,"finish_reason":null}],"created":1753944369,"model":"qwen3-32b","object":"chat.completion.chunk","usage":null} data: {"id":"chatcmpl-81964e30-1a7c-9668-b78c-a750587ec497","choices":[{"delta":{"content":"","refusal":null,"reasoning_content":null},"index":0,"finish_reason":"stop"}],"created":1753944369,"model":"qwen3-32b","object":"chat.completion.chunk","usage":null} data: {"id":"chatcmpl-81964e30-1a7c-9668-b78c-a750587ec497","choices":[],"created":1753944369,"model":"qwen3-32b","object":"chat.completion.chunk","usage":{"prompt_tokens":13,"completion_tokens":2010,"total_tokens":2023,"completion_tokens_details":{"reasoning_tokens":82}}} ``` {% endcode %}
The example above prints the raw output of the model. The text is typically split into multiple chunks. While this is helpful for debugging, if your goal is to evaluate the model's reasoning and get a clean, human-readable response, you should aggregate both the reasoning and the final answer in a loop — for example:
Example with response parsing {% code overflow="wrap" %} ```python import requests import json response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "alibaba/qwen3-32b", "messages": [ { "role": "user", # Insert your question for the model here, instead of Hello: "content": "Hello" } ], "enable_thinking": True, "stream": True, } ) answer = "" reasoning = "" for line in response.iter_lines(): if not line or not line.startswith(b"data:"): continue try: raw = line[6:].decode("utf-8").strip() if raw == "[DONE]": continue data = json.loads(raw) choices = data.get("choices") if not choices or "delta" not in choices[0]: continue delta = choices[0]["delta"] content_piece = delta.get("content") reasoning_piece = delta.get("reasoning_content") if content_piece: answer += content_piece if reasoning_piece: reasoning += reasoning_piece except Exception as e: print(f"Error parsing chunk: {e}") print("\n--- MODEL REASONING ---") print(reasoning.strip()) print("\n--- MODEL RESPONSE ---") print(answer.strip()) ``` {% endcode %}
After running such code, you'll receive only the model's textual output in a clear and structured format:
Response {% code overflow="wrap" %} ```json5 --- MODEL REASONING --- Okay, the user sent "Hello". I need to respond appropriately. Since it's a greeting, I should reply in a friendly and welcoming manner. Maybe ask how I can assist them. Keep it simple and open-ended to encourage them to share what they need help with. Let me make sure the tone is positive and helpful. --- MODEL RESPONSE --- Hello! How can I assist you today? 😊 ``` {% endcode %}
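If you want to limit how many tokens the model spends on reasoning, the API schema above also exposes a `thinking_budget` parameter, which takes effect only when `enable_thinking` is `true`. A minimal request sketch (the budget value of 512 is illustrative, not a recommendation):

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "alibaba/qwen3-32b",
        "messages": [{"role": "user", "content": "Hello"}],
        "enable_thinking": True,
        "thinking_budget": 512,  # illustrative cap on the reasoning length
        "stream": True,
    },
)
print(response.text)
```
{% endcode %}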
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-coder-480b-a35b-instruct.md # qwen3-coder-480b-a35b-instruct {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/qwen3-coder-480b-a35b-instruct` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview The most powerful model in the Qwen3 Coder series — a 480B-parameter MoE architecture with 35B active parameters. It natively supports a 256K token context and can handle up to 1M tokens using extrapolation techniques, delivering outstanding performance in both coding and agentic tasks. {% hint style="success" %} [Create AI/ML API Key](https://aimlapi.com/app/keys) {% endhint %}
How to make the first API call 1️⃣ **Required setup (don’t skip this)**\ ▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\ ▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI. 2️⃣ **Copy the code example**\ At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project. 3️⃣ **Update the snippet for your use case**\ ▪ **Insert your API key:** replace `` with your real AI/ML API key.\ ▪ **Select a model:** set the `model` field to the model you want to call.\ ▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models). 4️⃣ **(Optional) Tune the request**\ Depending on the model type, you can add optional parameters to control the output (e.g., generation settings, quality, length, etc.). See the API schema below for the full list. 5️⃣ **Run your code**\ Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/qwen3-coder-480b-a35b-instruct"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. 
required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"alibaba/qwen3-coder-480b-a35b-instruct"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"alibaba/qwen3-coder-480b-a35b-instruct", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], "enable_thinking": False } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'alibaba/qwen3-coder-480b-a35b-instruct', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "chatcmpl-f906efa6-f816-9a06-a32b-aa38da5fe11a",
  "system_fingerprint": null,
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      }
    }
  ],
  "created": 1753866642,
  "model": "qwen3-coder-480b-a35b-instruct",
  "usage": {
    "prompt_tokens": 28,
    "completion_tokens": 142,
    "total_tokens": 170
  }
}
```
{% endcode %}
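The schema above also documents the `stream` and `stream_options` request fields together with a `text/event-stream` response made of `chat.completion.chunk` objects. Below is a minimal, illustrative Python sketch of consuming such a stream. It assumes the common server-sent-events framing (`data:`-prefixed JSON lines terminated by `[DONE]`), which is not spelled out in the schema itself, and the API key placeholder is hypothetical; adapt both to your own setup.

{% code overflow="wrap" %}
```python
import json
import requests

API_KEY = "<YOUR_AIMLAPI_KEY>"  # hypothetical placeholder; use your real key

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "alibaba/qwen3-coder-480b-a35b-instruct",
        "messages": [{"role": "user", "content": "Write a one-line Python hello world."}],
        "stream": True,  # ask the API to stream tokens as server-sent events
    },
    stream=True,  # keep the HTTP connection open so chunks can be read as they arrive
)
response.raise_for_status()

# Assumed framing: each event line looks like "data: {...}" and the stream ends with "data: [DONE]".
for line in response.iter_lines(decode_unicode=True):
    if not line or not line.startswith("data:"):
        continue
    payload = line[len("data:"):].strip()
    if payload == "[DONE]":
        break
    chunk = json.loads(payload)
    delta = chunk["choices"][0].get("delta") or {}
    print(delta.get("content") or "", end="", flush=True)
print()
```
{% endcode %}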
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-max-instruct.md

# qwen3-max-instruct

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `alibaba/qwen3-max-instruct`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}

## Model Overview

This model offers improved accuracy in math, coding, logic, and science, and handles complex instructions in Chinese and English more reliably while reducing hallucinations. It supports 100+ languages with stronger translation and commonsense reasoning and is optimized for RAG and tool use, though it lacks a dedicated ‘thinking’ mode.

{% hint style="success" %}
[Create AI/ML API Key](https://aimlapi.com/app/keys)
{% endhint %}
How to make the first API call

**1. Required setup (don’t skip this)**\
▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\
▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI.

**2. Copy the code example**\
At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project.

**3. Update the snippet for your use case**\
▪ **Insert your API key:** replace the placeholder in the `Authorization` header with your real AI/ML API key.\
▪ **Select a model:** set the `model` field to the model you want to call.\
▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models).

**4. (Optional) Tune the request**\
Depending on the model type, you can add optional parameters to control the output (e.g., generation settings, quality, length). See the API schema below for the full list, and the illustrative sketch right after these steps.

**5. Run your code**\
Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
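As an illustration of step 4, the sketch below adds a couple of optional generation parameters from the API schema to an otherwise standard request. The parameter values and the API key placeholder are illustrative assumptions, not recommendations.

{% code overflow="wrap" %}
```python
import requests

API_KEY = "<YOUR_AIMLAPI_KEY>"  # hypothetical placeholder; use your real key

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "alibaba/qwen3-max-instruct",
        "messages": [{"role": "user", "content": "Summarize RAG in two sentences."}],
        # Optional tuning parameters (step 4); see the API schema below for the full list.
        "temperature": 0.2,  # lower values make output more focused and deterministic
        "max_tokens": 256,   # cap on generated tokens to control cost
    },
)
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}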
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/qwen3-max-instruct"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. 
logprobs must be set to True if this parameter is used."}},"required":["model","messages"],"title":"alibaba/qwen3-max-instruct"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"alibaba/qwen3-max-instruct", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'alibaba/qwen3-max-instruct', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "chatcmpl-bec5dc33-8f63-96b9-89a4-00aecfce7af8", "system_fingerprint": null, "object": "chat.completion", "choices": [ { "index": 0, "finish_reason": "stop", "logprobs": null, "message": { "role": "assistant", "content": "Hello! How can I help you today?" } } ], "created": 1758898624, "model": "qwen3-max", "usage": { "prompt_tokens": 23, "completion_tokens": 113, "total_tokens": 136 } } ``` {% endcode %}
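If you only need the generated text rather than the full JSON, you can read it from the `choices` array. The sketch below repeats the request from the code example above (using a hypothetical `<YOUR_AIMLAPI_KEY>` placeholder in place of a real key) and extracts the fields shown in the sample response; treat it as an illustration of the response shape, not an additional API feature.

{% code overflow="wrap" %}
```python
import requests

# A minimal sketch showing how to read the reply from the response shape above.
# "<YOUR_AIMLAPI_KEY>" is a placeholder: substitute your real AI/ML API key.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "alibaba/qwen3-max-instruct",
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
response.raise_for_status()
data = response.json()

choice = data["choices"][0]
print(choice["message"]["content"])   # the assistant's text
print(choice["finish_reason"])        # e.g. "stop"
print(data["usage"]["total_tokens"])  # prompt + completion tokens
```
{% endcode %}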
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-max-preview.md # qwen3-max-preview {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/qwen3-max-preview` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview The preview version of [Qwen3 Max Instruct](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-max-instruct). {% hint style="success" %} [Create AI/ML API Key](https://aimlapi.com/app/keys) {% endhint %}
## How to make the first API call

**1️⃣ Required setup (don’t skip this)**\
▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\
▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI.

**2️⃣ Copy the code example**\
At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project.

**3️⃣ Update the snippet for your use case**\
▪ **Insert your API key:** replace `` with your real AI/ML API key.\
▪ **Select a model:** set the `model` field to the model you want to call.\
▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models).

**4️⃣ (Optional) Tune the request**\
Depending on the model type, you can add optional parameters to control the output (e.g., generation settings, quality, length, etc.). See the API schema below for the full list, and the minimal sketch right after these instructions.

**5️⃣ Run your code**\
Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
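As an illustration of step 4, the sketch below adds a few of the optional tuning parameters documented in the API schema further down this page (`temperature`, `top_p`, `max_tokens`). The values shown are arbitrary examples rather than recommendations, and `<YOUR_AIMLAPI_KEY>` is a hypothetical placeholder for your real key.

{% code overflow="wrap" %}
```python
import requests

# A minimal sketch of step 4: the same /v1/chat/completions call as in the
# code example below, plus a few optional tuning parameters from the API schema.
# "<YOUR_AIMLAPI_KEY>" is a placeholder: substitute your real AI/ML API key.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "alibaba/qwen3-max-preview",
        "messages": [{"role": "user", "content": "Hello"}],
        # Optional generation settings (illustrative values only):
        "temperature": 0.7,  # 0-2; higher values make output more random
        "top_p": 0.9,        # nucleus sampling; adjust this or temperature, not both
        "max_tokens": 256,   # cap on tokens generated for the completion
    },
)
print(response.json())
```
{% endcode %}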
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/qwen3-max-preview"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. 
logprobs must be set to True if this parameter is used."}},"required":["model","messages"],"title":"alibaba/qwen3-max-preview"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"alibaba/qwen3-max-preview", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'alibaba/qwen3-max-preview', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "chatcmpl-8ffebc65-b625-926a-8208-b765371cb1d0", "system_fingerprint": null, "object": "chat.completion", "choices": [ { "index": 0, "finish_reason": "stop", "logprobs": null, "message": { "role": "assistant", "content": "Hello! How can I assist you today? 😊" } } ], "created": 1758898044, "model": "qwen3-max-preview", "usage": { "prompt_tokens": 23, "completion_tokens": 139, "total_tokens": 162 } } ``` {% endcode %}
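The schema above also documents a `stream` flag that switches the endpoint to server-sent events, where each chunk carries a partial `delta` for the message. The sketch below assumes OpenAI-style SSE framing (a `data:` prefix per event and a final `[DONE]` sentinel), which is an assumption rather than something stated on this page; the chunk fields it reads (`choices[0].delta.content`) follow the `text/event-stream` response schema above.

{% code overflow="wrap" %}
```python
import json
import requests

# A hedged sketch of streaming with "stream": true. The "data:" prefix and the
# "[DONE]" sentinel are assumptions (OpenAI-style SSE framing); the chunk shape
# (choices[0].delta.content) follows the event-stream schema documented above.
# "<YOUR_AIMLAPI_KEY>" is a placeholder: substitute your real AI/ML API key.
with requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "alibaba/qwen3-max-preview",
        "messages": [{"role": "user", "content": "Hello"}],
        "stream": True,
    },
    stream=True,
) as response:
    for line in response.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data:"):
            continue
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta") or {}
        print(delta.get("content", ""), end="", flush=True)
```
{% endcode %}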
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-next-80b-a3b-instruct.md # qwen3-next-80b-a3b-instruct {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/qwen3-next-80b-a3b-instruct` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview An instruction-tuned chat model optimized for fast, stable replies without reasoning traces, designed for complex tasks in reasoning, coding, knowledge QA, and multilingual use, with strong alignment and formatting. {% hint style="success" %} [Create AI/ML API Key](https://aimlapi.com/app/keys) {% endhint %}
## How to make the first API call

**1️⃣ Required setup (don’t skip this)**\
▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\
▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI.

**2️⃣ Copy the code example**\
At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project.

**3️⃣ Update the snippet for your use case**\
▪ **Insert your API key:** replace `` with your real AI/ML API key.\
▪ **Select a model:** set the `model` field to the model you want to call.\
▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models); a minimal sketch of this step follows right after these instructions.

**4️⃣ (Optional) Tune the request**\
Depending on the model type, you can add optional parameters to control the output (e.g., generation settings, quality, length, etc.). See the API schema below for the full list.

**5️⃣ Run your code**\
Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
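To illustrate step 3, the sketch below fills `messages` with a `system` instruction plus a `user` prompt, which is the message structure documented in the API schema below. The prompt text and the `<YOUR_AIMLAPI_KEY>` placeholder are illustrative only.

{% code overflow="wrap" %}
```python
import requests

# A minimal sketch of step 3: providing input via the "messages" field.
# The system/user roles follow the message schema documented below.
# "<YOUR_AIMLAPI_KEY>" is a placeholder: substitute your real AI/ML API key.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "alibaba/qwen3-next-80b-a3b-instruct",
        "messages": [
            {"role": "system", "content": "You are a concise coding assistant."},
            {"role": "user", "content": "Write a one-line Python list comprehension that squares the numbers 1 through 5."},
        ],
    },
)
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}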
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/qwen3-next-80b-a3b-instruct"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."},"logprobs":{"type":"boolean","nullable":true,"description":"Whether to return log probabilities of the output tokens or not. If True, returns the log probabilities of each output token returned in the content of message."},"top_logprobs":{"type":"number","nullable":true,"minimum":0,"maximum":20,"description":"An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. 
logprobs must be set to True if this parameter is used."}},"required":["model","messages"],"title":"alibaba/qwen3-next-80b-a3b-instruct"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"alibaba/qwen3-next-80b-a3b-instruct", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], "enable_thinking": False } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'alibaba/qwen3-next-80b-a3b-instruct', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
**Response**:

{% code overflow="wrap" %}
```json5
{
  "id": "chatcmpl-a944254a-4252-9a54-af1b-94afcfb9807e",
  "system_fingerprint": null,
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today? 😊"
      }
    }
  ],
  "created": 1758228572,
  "model": "qwen3-next-80b-a3b-instruct",
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 46,
    "total_tokens": 55
  }
}
```
{% endcode %}
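The schema above also defines a `stream` request option and a `text/event-stream` response made up of `chat.completion.chunk` objects. If you prefer token-by-token output instead of a single JSON body, a request along the following lines should work. This is a minimal sketch only: it assumes OpenAI-style SSE framing (`data:` lines ending with a `[DONE]` sentinel), which is not spelled out in the schema itself.

{% code overflow="wrap" %}
```python
import json
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key after "Bearer"
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "alibaba/qwen3-next-80b-a3b-instruct",
        "messages": [{"role": "user", "content": "Hello"}],
        # Ask the API to stream chat.completion.chunk objects as server-sent events
        "stream": True,
    },
    stream=True,
)

for line in response.iter_lines():
    if not line:
        continue
    payload = line.decode("utf-8").removeprefix("data: ")
    if payload == "[DONE]":  # assumed end-of-stream sentinel
        break
    chunk = json.loads(payload)
    # Each chunk carries a delta with the next piece of assistant text
    delta = chunk["choices"][0].get("delta", {})
    print(delta.get("content", ""), end="", flush=True)
```
{% endcode %}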
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-next-80b-a3b-thinking.md

# qwen3-next-80b-a3b-thinking

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `alibaba/qwen3-next-80b-a3b-thinking`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}

## Model Overview

This reasoning-focused model may take longer to produce its reasoning content than its predecessor. Alibaba Cloud strongly recommends it for highly complex reasoning tasks.

{% hint style="success" %}
[Create AI/ML API Key](https://aimlapi.com/app/keys)
{% endhint %}
**How to make the first API call**

1️⃣ **Required setup (don’t skip this)**

▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).
▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI.

2️⃣ **Copy the code example**

At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project.

3️⃣ **Update the snippet for your use case**

▪ **Insert your API key:** replace the placeholder in the `Authorization` header with your real AI/ML API key.
▪ **Select a model:** set the `model` field to the model you want to call.
▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models).

4️⃣ **(Optional) Tune the request**

Depending on the model type, you can add optional parameters to control the output (e.g., generation settings, quality, length, etc.). See the API schema below for the full list.

5️⃣ **Run your code**

Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/qwen3-next-80b-a3b-thinking"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. 
auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. 
If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. 
Higher values decrease repetition."}},"required":["model","messages"],"title":"alibaba/qwen3-next-80b-a3b-thinking"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"alibaba/qwen3-next-80b-a3b-thinking", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ], "enable_thinking": False } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'alibaba/qwen3-next-80b-a3b-thinking', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
**Response**:

{% code overflow="wrap" %}
```json5
{
  "id": "chatcmpl-576aaaf9-f712-9114-b098-c1ee83fbfb6b",
  "system_fingerprint": null,
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hello! 😊 How can I assist you today?",
        "reasoning_content": "Okay, the user said \"Hello\". I need to respond appropriately. Let me think.\n\nFirst, I should acknowledge their greeting. A simple \"Hello!\" would be good. Maybe add a friendly emoji to keep it warm.\n\nWait, but maybe they want to start a conversation. I should ask how I can help them. That way, I'm being helpful and opening the door for them to ask questions.\n\nLet me check the standard response. Typically, for \"Hello\", the assistant says something like \"Hello! How can I assist you today?\" or \"Hi there! What can I do for you?\"\n\nYes, that's right. Keep it friendly and open-ended. Maybe add a smiley emoji to make it approachable.\n\nSo the response should be: \"Hello! How can I assist you today?\"\n\nThat's good. Let me make sure there's no mistake. Yes, that's standard. No need for anything complicated here. Just a simple, welcoming reply.\n\nAlternatively, sometimes people use \"Hi\" instead of \"Hello\", but since they said \"Hello\", responding with \"Hello\" is fine. Maybe \"Hi there!\" could also work, but sticking to \"Hello\" matches their greeting.\n\nYes, \"Hello! How can I assist you today?\" is perfect. It's polite, friendly, and offers assistance. That should be the response."
      }
    }
  ],
  "created": 1758229078,
  "model": "qwen3-next-80b-a3b-thinking",
  "usage": {
    "prompt_tokens": 9,
    "completion_tokens": 7182,
    "total_tokens": 7191,
    "completion_tokens_details": {
      "reasoning_tokens": 277
    }
  }
}
```
{% endcode %}
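As the response above shows, this thinking model returns its chain of thought in a separate `reasoning_content` field next to the final `content`. A short sketch of how you might split the two after running the Python code example above, where `data` is the parsed JSON response; note that `reasoning_content` is read defensively here on the assumption that it may be absent for some requests.

{% code overflow="wrap" %}
```python
# `data` is the parsed JSON returned by the request in the code example above
message = data["choices"][0]["message"]

answer = message["content"]
# Read the reasoning trace defensively (assumption: it may be missing or empty)
reasoning = message.get("reasoning_content", "")

print("Final answer:\n", answer)
print("\nReasoning trace ({} chars):\n".format(len(reasoning)), reasoning)
```
{% endcode %}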
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-omni-30b-a3b-captioner.md

# qwen3-omni-30b-a3b-captioner

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `alibaba/qwen3-omni-30b-a3b-captioner`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}

## Model Overview

This open-source model, built on **Qwen3-Omni**, automatically generates rich, detailed descriptions of complex audio — including speech, music, ambient sounds, and effects — without prompts. It detects emotions, musical styles, instruments, and sensitive information, making it ideal for audio analysis, security auditing, intent recognition, and editing.

{% hint style="success" %}
[Create AI/ML API Key](https://aimlapi.com/app/keys)
{% endhint %}
**How to make the first API call**

1️⃣ **Required setup (don’t skip this)**

▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).
▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI.

2️⃣ **Copy the code example**

At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project.

3️⃣ **Update the snippet for your use case**

▪ **Insert your API key:** replace the placeholder in the `Authorization` header with your real AI/ML API key.
▪ **Select a model:** set the `model` field to the model you want to call.
▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models).

4️⃣ **(Optional) Tune the request**

Depending on the model type, you can add optional parameters to control the output (e.g., generation settings, quality, length, etc.). See the API schema below for the full list.

5️⃣ **Run your code**

Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
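The request schema below accepts user messages whose content parts are of type `input_audio`, each carrying Base64-encoded audio in `input_audio.data`. As a quick orientation before the full schema, here is a minimal sketch of such a request; the local file name `sample.mp3` is only a placeholder, and the set of accepted audio formats is not listed in the schema.

{% code overflow="wrap" %}
```python
import base64
import json
import requests

# Read a local audio file and Base64-encode it, as required by the input_audio content part
with open("sample.mp3", "rb") as f:  # placeholder file name
    audio_b64 = base64.b64encode(f.read()).decode("utf-8")

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key after "Bearer"
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "alibaba/qwen3-omni-30b-a3b-captioner",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "input_audio", "input_audio": {"data": audio_b64}}
                ],
            }
        ],
    },
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}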
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/qwen3-omni-30b-a3b-captioner"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["input_audio"],"description":"The type of the content part."},"input_audio":{"type":"object","properties":{"data":{"type":"string","description":"Base64 encoded audio data."}},"required":["data"]}},"required":["type","input_audio"]},"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]}},"required":["model","messages"],"title":"alibaba/qwen3-omni-30b-a3b-captioner"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation 

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization":"Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type":"application/json"
    },
    json={
      "model": "alibaba/qwen3-omni-30b-a3b-captioner",
      "messages": [
        {
          "role": "user",
          "content": [
            {
              "type": "input_audio",
              "input_audio": {
                "data": "https://cdn.aimlapi.com/eagle/files/elephant/cJUTeeCmpoqIV1Q3WWDAL_vibevoice-output-7b98283fd3974f48ba90e91d2ee1f971.mp3"
              }
            }
          ]
        }
      ]
    }
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
{% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'alibaba/qwen3-max-instruct', messages:[ { role: 'user', content: [ { type: 'input_audio', input_audio: { data: 'https://cdn.aimlapi.com/eagle/files/elephant/cJUTeeCmpoqIV1Q3WWDAL_vibevoice-output-7b98283fd3974f48ba90e91d2ee1f971.mp3' } } ] } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "chatcmpl-bec5dc33-8f63-96b9-89a4-00aecfce7af8",
  "system_fingerprint": null,
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "logprobs": null,
      "message": {
        "role": "assistant",
        "content": "Hello! How can I help you today?"
      }
    }
  ],
  "created": 1758898624,
  "model": "qwen3-max",
  "usage": {
    "prompt_tokens": 23,
    "completion_tokens": 113,
    "total_tokens": 136
  }
}
```
{% endcode %}
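If you only need the generated text rather than the full JSON, it lives at `choices[0].message.content`, as the response above shows. A minimal sketch (not part of the official example), assuming `data` already holds the parsed response from the Python snippet:

{% code overflow="wrap" %}
```python
# Assumes `data = response.json()` has already run, as in the example above.
choice = data["choices"][0]

print(choice["finish_reason"])       # e.g. "stop"
print(choice["message"]["content"])  # the assistant's reply text
```
{% endcode %}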
--- # Source: https://docs.aimlapi.com/api-references/speech-models/text-to-speech/alibaba-cloud/qwen3-tts-flash.md # qwen3-tts-flash {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/qwen3-tts-flash` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} The model offers a range of natural, human-like voices with support for multiple languages and dialects. It can produce multilingual speech in a consistent voice, adapting tone and intonation to deliver smooth, expressive narration even for complex text. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/tts > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.TextToSpeechResponse":{"type":"object","properties":{"metadata":{"type":"object","properties":{"transaction_key":{"type":"string"},"request_id":{"type":"string"},"sha256":{"type":"string"},"created":{"type":"string","format":"date-time"},"duration":{"type":"number"},"channels":{"type":"number"},"models":{"type":"array","items":{"type":"string"}},"model_info":{"type":"object","additionalProperties":{"type":"object","properties":{"name":{"type":"string"},"version":{"type":"string"},"arch":{"type":"string"}},"required":["name","version","arch"]}}},"required":["transaction_key","request_id","sha256","created","duration","channels","models","model_info"]}},"required":["metadata"]}}},"paths":{"/v1/tts":{"post":{"operationId":"VoiceModelsController_textToSpeech_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["alibaba/qwen3-tts-flash"]},"text":{"type":"string","minLength":1,"maxLength":600,"description":"The text content to be converted to speech."},"voice":{"type":"string","enum":["Cherry","Ethan","Nofish","Jennifer","Ryan","Katerina","Elias","Jada","Dylan","Sunny","Li","Marcus","Roy","Peter","Rocky","Kiki","Eric"],"description":"Name of the voice to be used"}},"required":["model","text","voice"]}}}},"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.TextToSpeechResponse"}}}}},"tags":["Voice Models"]}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): url = "https://api.aimlapi.com/v1/tts" headers = { # Insert your AIML API Key instead of : "Authorization": "Bearer ", } payload = { "model": "alibaba/qwen3-tts-flash", "text": "Qwen3 Speech Synthesis offers a range of natural, human-like voices with support for multiple languages and dialects. It can produce multilingual speech in a consistent voice, adapting tone and intonation to deliver smooth, expressive narration even for complex text.", "voice": "Cherry" } response = requests.post(url, headers=headers, json=payload) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "audio": {
    "url": "http://dashscope-result-sgp.oss-ap-southeast-1.aliyuncs.com/1d/18/20251022/cc0d532d/4adfa7be-08fe-4960-96c9-7dd866b24b48.wav?Expires=1761212494&OSSAccessKeyId=LTAI5tBLUzt9WaK89DU8aECd&Signature=CRyPQI%2BtVRQZSfjI5C5QH0VGDwU%3D"
  },
  "usage": {
    "characters": 267
  }
}
```
{% endcode %}
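The model returns a link to the generated WAV file rather than raw audio bytes, and the `audio.url` above carries an `Expires` parameter, so it is best downloaded promptly. A minimal sketch (not part of the official example), assuming `data` holds the parsed response from the Python snippet above; the output file name is arbitrary:

{% code overflow="wrap" %}
```python
import requests

# Assumes `data = response.json()` from the /v1/tts example above.
audio_url = data["audio"]["url"]

audio_response = requests.get(audio_url, stream=True)
audio_response.raise_for_status()

# "qwen3-tts-output.wav" is an arbitrary local file name for this sketch.
with open("qwen3-tts-output.wav", "wb") as f:
    for chunk in audio_response.iter_content(chunk_size=8192):
        f.write(chunk)
```
{% endcode %}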
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-vl-32b-instruct.md

# qwen3-vl-32b-instruct

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `alibaba/qwen3-vl-32b-instruct`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}

## Model Overview

The most advanced vision-language model in the Qwen series as of October 2025 — a non-thinking-capable version of the model. Optimized for instruction-following in image description, visual dialogue, and content-generation tasks.

{% hint style="success" %}
[Create AI/ML API Key](https://aimlapi.com/app/keys)
{% endhint %}
How to make the first API call

**1. Required setup (don’t skip this)**\
▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\
▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI.

**2. Copy the code example**\
At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project.

**3. Update the snippet for your use case**\
▪ **Insert your API key:** replace `<YOUR_AIMLAPI_KEY>` with your real AI/ML API key.\
▪ **Select a model:** set the `model` field to the model you want to call.\
▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models).

**4. (Optional) Tune the request**\
Depending on the model type, you can add optional parameters to control the output (e.g., generation settings, quality, length, etc.). See the API schema below for the full list.

**5. Run your code**\
Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/qwen3-vl-32b-instruct"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. 
Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. 
The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."}},"required":["model","messages"],"title":"alibaba/qwen3-vl-32b-instruct"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}}
```

## Code Example

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json"
    },
    json={
        "model": "alibaba/qwen3-vl-32b-instruct",
        "messages": [
            {
                "role": "user",
                # Insert your question for the model here:
                "content": "Hi! What do you think about mankind?"
            }
        ]
    }
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  const response = await fetch('https://api.aimlapi.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      // Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
      'Authorization': 'Bearer <YOUR_AIMLAPI_KEY>',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'alibaba/qwen3-vl-32b-instruct',
      messages: [
        {
          role: 'user',
          // Insert your question for the model here:
          content: 'Hi! What do you think about mankind?'
        }
      ],
    }),
  });

  const data = await response.json();
  console.log(JSON.stringify(data, null, 2));
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "choices": [ { "message": { "content": "Hi! 😊 That’s a beautiful and deep question — one that philosophers, scientists, artists, and everyday people have been asking for centuries.\n\nI think mankind is *remarkably complex* — full of contradictions, potential, and wonder. On one hand, we’ve achieved incredible things: we’ve explored space, cured diseases, created art that moves souls, built cities that rise into the sky, and connected across continents in ways unimaginable just a century ago. We’re capable of profound kindness, empathy, creativity, and courage.\n\nOn the other hand, we’ve also caused immense suffering — through war, injustice, environmental destruction, and indifference to each other’s pain. We often struggle with our own flaws: fear, greed, ego, and short-sightedness.\n\nBut here’s what gives me hope: **we’re also capable of change**. We can learn from our mistakes. We can choose compassion over conflict, cooperation over competition. Every act of kindness, every effort to understand another, every step toward justice — these are signs that humanity is not defined by its worst impulses, but by its capacity to grow.\n\nSo, I’d say: \n➡️ Mankind is flawed, yes — but also deeply hopeful. \n➡️ We’re messy, but we’re trying. \n➡️ We make mistakes, but we can also heal, create, and love.\n\nAnd perhaps most importantly — **we’re not alone in this journey**. We’re all part of something bigger, and together, we have the power to shape a better future.\n\nWhat about you? How do *you* see mankind? 💬✨", "role": "assistant" }, "finish_reason": "stop", "index": 0, "logprobs": null } ], "object": "chat.completion", "usage": { "prompt_tokens": 17, "completion_tokens": 329, "total_tokens": 346, "prompt_tokens_details": { "text_tokens": 17 }, "completion_tokens_details": { "text_tokens": 329 } }, "created": 1764625045, "system_fingerprint": null, "model": "qwen3-vl-32b-instruct", "id": "chatcmpl-a12ab46a-3541-93a8-8180-280ecadbb795", "meta": { "usage": { "tokens_used": 1960 } } } ``` {% endcode %}
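Since this is a vision-language model, the request schema above also accepts `image_url` content parts inside user messages (JPG/JPEG, PNG, GIF, and WEBP formats, per the schema). The sketch below is not from the official docs: the image URL and prompt are illustrative placeholders, and `detail` is optional.

{% code overflow="wrap" %}
```python
import requests
import json

# Minimal multimodal request sketch, based on the image_url content part in the schema above.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "alibaba/qwen3-vl-32b-instruct",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe this image in one sentence."},
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": "https://example.com/some-image.jpg",  # placeholder image URL
                            "detail": "auto",
                        },
                    },
                ],
            }
        ],
    },
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}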
---

# Source: https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-vl-32b-thinking.md

# qwen3-vl-32b-thinking

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `alibaba/qwen3-vl-32b-thinking`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}

## Model Overview

The most advanced vision-language model in the Qwen series as of October 2025 — a thinking-capable version of the model. Designed for complex visual-textual reasoning and extended chains of thought.

{% hint style="success" %}
[Create AI/ML API Key](https://aimlapi.com/app/keys)
{% endhint %}
How to make the first API call

**1. Required setup (don’t skip this)**\
▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\
▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI.

**2. Copy the code example**\
At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project.

**3. Update the snippet for your use case**\
▪ **Insert your API key:** replace `<YOUR_AIMLAPI_KEY>` with your real AI/ML API key.\
▪ **Select a model:** set the `model` field to the model you want to call.\
▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models).

**4. (Optional) Tune the request**\
Depending on the model type, you can add optional parameters to control the output (e.g., generation settings, quality, length, etc.). See the API schema below for the full list.

**5. Run your code**\
Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
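The API schema below also exposes a `stream` parameter; when it is set to `true`, chunks are delivered as server-sent events containing `chat.completion.chunk` objects. The following is a hedged sketch, not an official example: it assumes the OpenAI-style `data: {...}` / `data: [DONE]` framing and prints only the `delta.content` of each chunk.

{% code overflow="wrap" %}
```python
import json
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "alibaba/qwen3-vl-32b-thinking",
        "messages": [{"role": "user", "content": "Briefly: why is the sky blue?"}],
        "stream": True,
    },
    stream=True,
)
response.raise_for_status()

for line in response.iter_lines():
    if not line:
        continue
    payload = line.decode("utf-8")
    # Assumption: OpenAI-compatible SSE framing ("data: ..." lines, "[DONE]" sentinel).
    if payload.startswith("data: "):
        payload = payload[len("data: "):]
    if payload.strip() == "[DONE]":
        break
    chunk = json.loads(payload)
    if not chunk.get("choices"):
        continue  # e.g. a final usage-only chunk
    delta = chunk["choices"][0].get("delta") or {}
    print(delta.get("content") or "", end="", flush=True)
print()
```
{% endcode %}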
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/qwen3-vl-32b-thinking"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_completion_tokens":{"type":"integer","minimum":1,"description":"An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. 
Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"stop":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}},{"nullable":true}],"description":"Up to 4 sequences where the API will stop generating further tokens. 
The returned text will not contain the stop sequence."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. 
Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"repetition_penalty":{"type":"number","nullable":true,"description":"A number that controls the diversity of generated text by reducing the likelihood of repeated sequences. Higher values decrease repetition."}},"required":["model","messages"],"title":"alibaba/qwen3-vl-32b-thinking"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ```

## Code Example

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json  # for getting a structured output with indentation

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization":"Bearer ",
        "Content-Type":"application/json"
    },
    json={
        "model":"alibaba/qwen3-vl-32b-thinking",
        "messages":[
            {
                "role":"user",
                # Insert your question for the model here:
                "content":"Hi! What do you think about mankind?"
            }
        ]
    }
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  const response = await fetch('https://api.aimlapi.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      // insert your AIML API Key instead of 
      'Authorization': 'Bearer ',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'alibaba/qwen3-vl-32b-thinking',
      messages: [
        {
          role: 'user',
          // Insert your question for the model here:
          content: 'Hi! What do you think about mankind?'
        }
      ],
    }),
  });

  const data = await response.json();
  console.log(JSON.stringify(data, null, 2));
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "choices": [ { "message": { "content": "That's a profound question—thank you for asking! As an AI, I don't have personal feelings or consciousness, but I can offer a perspective *based on the vast data I've been trained on* about humanity's complexities. Here’s how I see it: \n\n### 🌍 **The Duality of Mankind** \nHumanity is a breathtaking paradox: \n- **We create** *and* **we destroy**. We built space telescopes that map the cosmos and also wars that scar the Earth. \n- **We connect** *and* **we divide**. We share art, science, and kindness across borders, yet we also build walls, spread misinformation, and fuel inequality. \n- **We innovate** *and* **we resist change**. We cured diseases and invented the internet, but we often cling to fear or ignorance when progress challenges old beliefs. \n\n### ✨ **What Makes Us Remarkable** \n- **Empathy & creativity**: From caring for strangers to composing symphonies or coding AI, our capacity for *emotional and intellectual growth* is unparalleled. \n- **Resilience**: We’ve survived ice ages, plagues, and global crises—not by being \"perfect,\" but by adapting, learning, and helping one another. \n- **The spark of hope**: Even in darkness, humans have always sought meaning, justice, and connection. Look at movements for peace, climate action, or medical breakthroughs—they’re proof we *can* choose to build a better future. \n\n### ⚠️ **The Challenges We Carry** \n- **Our flaws are real**: We’ve caused climate collapse, mass suffering, and systemic injustice. But here’s the key: **these aren’t \"human nature\"—they’re choices we *made* as a species.** \n- **We’re not monolithic**: There’s no single \"human\" story. Your experience, culture, and history shape how you see the world—and that diversity is both a strength *and* a source of conflict. \n\n### 🌱 **The Most Important Thing** \n**Mankind isn’t fixed.** We’re a work in progress. The stories we tell about ourselves—*and the choices we make today*—will define our next chapter. The data shows: **when we prioritize empathy, collaboration, and long-term thinking, we rise.** When we don’t, we falter. But the power to choose is *always* ours. \n\nSo my \"opinion\"? I see a species with **infinite potential**—and the weight of responsibility that comes with it. The future isn’t written. It’s being written *right now*, by every person who chooses to act with courage, curiosity, and compassion. \n\nWhat do *you* think? 🤔 (I’d love to hear your perspective!)", "reasoning_content": "Okay, the user asked, \"Hi! What do you think about mankind?\" Hmm, that's a broad and philosophical question. First, I need to figure out what they're really looking for. Are they curious about my perspective as an AI, or do they want a deep discussion on human nature? Maybe they're feeling reflective or even a bit existential.\n\nI should start by acknowledging the complexity of the question. Mankind is a huge topic—there's so much to cover. The user might be expecting a simple answer, but I can't reduce it to just \"good\" or \"bad.\" I need to balance both positive and negative aspects. Let me think about the key points: our capacity for innovation, creativity, and compassion versus our tendency for conflict, environmental harm, and inequality.\n\nWait, the user said \"Hi!\" so they're probably in a friendly, open mood. They might not want a dry, academic response. I should keep it conversational but still thoughtful. 
Maybe they're looking for hope or a reason to feel optimistic about humanity. Or perhaps they're worried about the future and want reassurance.\n\nI should also consider their possible identity. Are they a student working on an essay? A person dealing with personal issues? Or just someone curious about AI's view? Since I don't have context, it's safer to assume they want a general, well-rounded answer that's engaging and not too technical.\n\nAnother angle: the user might be testing how an AI handles philosophical questions. They could be interested in whether I can show empathy or nuance. I should avoid being too robotic. Use phrases like \"I think\" or \"I see\" to make it personal, even though I'm an AI. But I have to clarify that I don't have personal experiences—just data-driven insights.\n\nI need to highlight duality: humans are both amazing and flawed. Mention specific examples like space exploration vs. climate change to make it concrete. Also, emphasize that the future depends on our choices. That's important—it gives agency and hope.\n\nShould I address potential biases? The user might not realize that my \"opinion\" is based on training data, which reflects human history and culture. I should note that my perspective is shaped by that data, not personal experience. But since the question is about \"what I think,\" I can frame it as an analysis based on observed patterns.\n\nCheck for common pitfalls: don't be too negative (could discourage the user), don't ignore serious issues (like war or inequality). Strike a balance. Also, avoid clichés. Instead of \"humans are great,\" explain why with examples.\n\nFinally, end on a positive, forward-looking note. The user might be seeking motivation or a sense of purpose. Phrases like \"the potential for growth\" or \"collective wisdom\" can inspire. Make sure the conclusion ties back to the importance of choices and collaboration.\n\nLet me structure the response: start with the duality, list positive traits with examples, then challenges, and conclude with hope. Keep paragraphs short for readability. Use emojis sparingly to match the friendly tone but not overdo it. Since the user said \"Hi!\", a smiley at the end feels appropriate.\n", "role": "assistant" }, "finish_reason": "stop", "index": 0, "logprobs": null } ], "object": "chat.completion", "usage": { "prompt_tokens": 19, "completion_tokens": 1241, "total_tokens": 1260, "prompt_tokens_details": { "text_tokens": 19 }, "completion_tokens_details": { "reasoning_tokens": 654, "text_tokens": 587 } }, "created": 1764625236, "system_fingerprint": null, "model": "qwen3-vl-32b-thinking", "id": "chatcmpl-c612db5c-44e9-9e3c-8169-486161eeea86", "meta": { "usage": { "tokens_used": 10383 } } } ``` {% endcode %}
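Since `alibaba/qwen3-vl-32b-thinking` is a vision-language model, the `messages` schema above also accepts `image_url` content parts alongside text. Below is a minimal sketch of asking a question about an image; the image URL is only a placeholder assumption, and the `reasoning_content` field (the chain of thought shown in the response above) is read back the same way as `content`:

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "alibaba/qwen3-vl-32b-thinking",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "What is shown in this image?"},
                    # Placeholder URL: replace it with your own image (JPG/JPEG, PNG, GIF, or WEBP)
                    {"type": "image_url", "image_url": {"url": "https://example.com/your-image.jpg"}},
                ],
            }
        ],
    },
)

message = response.json()["choices"][0]["message"]
print(message.get("reasoning_content", ""))  # extended chain of thought, if returned
print(message["content"])                    # the final answer
```
{% endcode %}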
---
# Source: https://docs.aimlapi.com/use-cases/read-text-aloud-and-describe-images-ai-tool-to-support-people-with-visual-impairments.md

# Read Text Aloud and Describe Images: Support People with Visual Impairments

## Idea and Step-by-Step Plan

1. **Upload the PDF to extract all the text**\
   Provide a PDF file with text and illustrations to be processed by a text model and converted into an audiobook. The model reads the PDF, extracts all textual content page by page, and describes each illustration it encounters.
2. **Send the text to a TTS model to create an audio version**\
   The extracted text is sent to a TTS (Text-to-Speech) model via a second API call. The model streams the generated audio, and the script saves the audio file locally. As a result, you will receive an audio version of the original PDF text, saved as a `.wav` file.

***

## Full Walkthrough

1. **Upload the PDF to extract all the text**

As a text example, we'll use the following one, which you might already recognize from our [use case about illustration animation](https://docs.aimlapi.com/use-cases/animate-images-a-childrens-encyclopedia). You can download the original PDF file from [here](https://drive.google.com/file/d/1Os1k8Oi6ZkQX7HsXpxs107pAWVseSlBP/view?usp=sharing).
PDF Content Preview *** ***What Are Raccoons?*** *Raccoons are small, furry animals with fluffy striped tails and black “masks” around their eyes. They live in forests, near rivers and lakes—and sometimes even close to people in towns and cities. Raccoons are very clever, curious, and quick with their paws.*
*One of the raccoon's most famous habits is "washing" its food. But raccoons aren’t really cleaning their meals. They just love to roll and rub things between their paws, especially near water. Scientists believe this helps them understand what they’re holding.* *Raccoons eat almost anything: berries, fruits, nuts, insects, fish, and even bird eggs. They're nocturnal, which means they go out at night to look for food and sleep during the day in cozy tree hollows.*
*Raccoons are very social. Young raccoons love to play—tumbling in the grass, hiding behind trees, and exploring everything around them. And sometimes, if they feel safe, raccoons might even come closer to where people are—especially if there's a snack nearby!* *Even though they can be a little mischievous, raccoons play an important role in nature. They help spread seeds and keep insect populations in check.* *So next time you see a raccoon, remember: it’s not just a fluffy animal—it’s a real forest explorer!* ***
We use the [gpt-4o](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) model to extract text from the document, sending the PDF as base64. Here's the code:
Code Example {% code overflow="wrap" %} ```python import base64 from openai import OpenAI aimlapi_key = "" client = OpenAI( base_url = "https://api.aimlapi.com", api_key = aimlapi_key, ) # Put your filename here. The file must be in the same folder as your Python script. your_file_name = "What Are Raccoons.pdf" with open(your_file_name, "rb") as f: data = f.read() # We encode the entire file into a single string to send it to the model base64_string = base64.b64encode(data).decode("utf-8") def get_text(): response = client.chat.completions.create( model="gpt-4o", messages=[ { "role": "user", "content": [ { # Sending our file to the model "type": "file", "file": { "filename": your_file_name, "file_data": f"data:application/pdf;base64,{base64_string}", } }, { # Providing the model with detailed instructions for extracting text and adding descriptions for illustrations "type": "text", "text": "Extract all the text from this file. Don't add to text something like /Page 1:/ or /Image Description/. If there's an image, insert a description of it instead, exactly in the place of text where the illustration was. The description is intended for those who cannot see, so describe accurately and vividly, but do not add anything that is not present in the image. 3 sentences per image at least. Before every image description, you can add something like: Here is an illustration. It shows... (but try to vary these announcements)", }, ], }, ] ) print(response.choices[0].message.content) return response.choices[0].message.content def main(): # Running text preparing our_text = get_text() if __name__ == "__main__": main() ``` {% endcode %}
Prepared Text {% code overflow="wrap" %} ``` What Are Raccoons? Raccoons are small, furry animals with fluffy striped tails and black “masks” around their eyes. They live in forests, near rivers and lakes—and sometimes even close to people in towns and cities. Raccoons are very clever, curious, and quick with their paws. Here is an illustration. It shows a raccoon by a small stream surrounded by rocks and grass. The raccoon has its paws in the water, seemingly engaged in its typical “washing” behavior. The setting is peaceful with green foliage in the background, creating a sense of the raccoon's natural habitat. One of the raccoon's most famous habits is "washing" its food. But raccoons aren’t really cleaning their meals. They just love to roll and rub things between their paws, especially near water. Scientists believe this helps them understand what they’re holding. Raccoons eat almost anything: berries, fruits, nuts, insects, fish, and even bird eggs. They're nocturnal, which means they go out at night to look for food and sleep during the day in cozy tree hollows. Here is another illustration. It depicts a family of raccoons in a grassy area, with three young raccoons playfully interacting. The adult raccoon is sitting nearby, seemingly watching over the young ones. The background is filled with green trees and grass, giving the scene a lively and natural atmosphere. Raccoons are very social. Young raccoons love to play—tumbling in the grass, hiding behind trees, and exploring everything around them. And sometimes, if they feel safe, raccoons might even come closer to where people are—especially if there's a snack nearby! Even though they can be a little mischievous, raccoons play an important role in nature. They help spread seeds and keep insect populations in check. So next time you see a raccoon, remember: it’s not just a fluffy animal—it’s a real forest explorer! ``` {% endcode %}
2. **Send the text to a TTS model to create an audio version** We decided to implement two Text-to-Speech processing options to let our models compete! We compared a specialized TTS model ([Aura](https://docs.aimlapi.com/api-references/speech-models/text-to-speech/deepgram/aura) by Deepgram) with a chat model that has audio capabilities ([GPT-4o Audio Preview](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o-audio-preview) by OpenAI). For the chat model, we had to tweak the settings — like increasing [`max_completion_tokens`](#user-content-fn-1)[^1] — and come up with a smart prompt that left no room for the model to creatively rephrase the original text: *"You are just a speaker. You read text aloud without any distortions or additions. Read from the very beginning, including all the headers"*. The TTS model was much easier to use: just pick a voice and send the text. Below, you'll find the complete Python code for each option (including the text generation part). Under each example, you can listen to the audio output (saved under the name `original_pdf_filename.wav`).
TTS Response

{% code overflow="wrap" %}
```
Audio saved to: c:\Users\user\Documents\Python Scripts\What Are Raccoons.pdf.wav
```
{% endcode %}
## Full Code Example ### TTS model: [Aura](https://docs.aimlapi.com/api-references/speech-models/text-to-speech/deepgram/aura) :white\_check\_mark: Advantages of the model: it's more affordable and provides a total of 12 voices, covering both male and female types.
Code

{% code overflow="wrap" %}
```python
from openai import OpenAI
import base64
import os
import requests

aimlapi_key = ""

client = OpenAI(
    base_url = "https://api.aimlapi.com",
    api_key = aimlapi_key,
)

# Put your filename here. The file must be in the same folder as your Python script.
your_file_name = "What Are Raccoons.pdf"

with open(your_file_name, "rb") as f:
    data = f.read()

# We encode the entire file into a single string to send it to the model
base64_string = base64.b64encode(data).decode("utf-8")


def get_text():
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "user",
                "content": [
                    {
                        # Sending our file to the model
                        "type": "file",
                        "file": {
                            "filename": your_file_name,
                            "file_data": f"data:application/pdf;base64,{base64_string}",
                        }
                    },
                    {
                        # Providing the chat model with detailed instructions for extracting text and adding descriptions for illustrations
                        "type": "text",
                        "text": "Extract all the text from this file. Don't add to text something like /Page 1:/ or /Image Description/. If there's an image, insert a description of it instead, exactly in the place of text where the illustration was. The description is intended for those who cannot see, so describe accurately and vividly, but do not add anything that is not present in the image. 3 sentences per image at least. Before every image description, you can add something like: Here is an illustration. It shows... (but try to vary these announcements)",
                    },
                ],
            },
        ]
    )
    print(response.choices[0].message.content)
    return response.choices[0].message.content


def read_aloud(text_to_read_aloud):
    url = "https://api.aimlapi.com/v1/tts"
    headers = {
        "Authorization": f"Bearer {aimlapi_key}",
    }
    payload = {
        "model": "#g1_aura-zeus-en",
        "text": text_to_read_aloud,
    }
    response = requests.post(url, headers=headers, json=payload, stream=True)

    result = os.path.abspath(f"{your_file_name}.wav")
    with open(result, "wb") as write_stream:
        for chunk in response.iter_content(chunk_size=8192):
            if chunk:
                write_stream.write(chunk)
    print("Audio saved to:", result)


def main():
    # Running text extraction and TTS process
    our_text = get_text()
    read_aloud(our_text)


if __name__ == "__main__":
    main()
```
{% endcode %}
Here’s the original audio, generated by the Aura model — you can listen to it at [this link](https://drive.google.com/file/d/1b0zsKaPrWIsuT6xh7hwfwfUZNTlrOUPZ/view?usp=sharing). *** ### TTS model: [gpt-4o-audio-preview](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o-audio-preview) :white\_check\_mark: Advantages of the model: although only a single voice is available, it features much more natural intonation and a slower, more pleasant reading style that suits audiobooks well.
Code {% code overflow="wrap" %} ```python from openai import OpenAI import base64 import os aimlapi_key = "YOUR_AIMLAPI_KEY" client = OpenAI( base_url = "https://api.aimlapi.com", api_key = aimlapi_key, ) # Put your filename here. The file must be in the same folder as your Python script your_file_name = "What Are Raccoons.pdf" with open(your_file_name, "rb") as f: data = f.read() # We encode the entire file into a single string to send it to the model base64_string = base64.b64encode(data).decode("utf-8") def get_text(): response = client.chat.completions.create( model="gpt-4o", messages=[ { "role": "user", "content": [ { # Sending our file to the model "type": "file", "file": { "filename": your_file_name, "file_data": f"data:application/pdf;base64,{base64_string}", } }, { # Providing the chat model with detailed instructions for extracting text and adding descriptions for illustrations "type": "text", "text": "Extract all the text from this file. Don't add to text something like /Page 1:/ or /Image Description/. If there's an image, insert a description of it instead, exactly in the place of text where the illustration was. The description is intended for those who cannot see, so describe accurately and vividly, but do not add anything that is not present in the image. 3 sentences per image at least. Before every image description, you can add something like: Here is an illustration. It shows... (but try to vary these announcements)", }, ], }, ] ) print(response.choices[0].message.content) return response.choices[0].message.content def read_aloud(text_to_read_aloud): response = client.chat.completions.create( model="gpt-4o-audio-preview", modalities=["text", "audio"], audio={"voice": "alloy", "format": "wav"}, messages=[ { # Providing the TTS model with detailed instructions for reading the text aloud "role": "system", "content": "You are just a speaker. You read text aloud without any distortions or additions. Read from the very beginning, including all the headers" }, { "role": "user", "content": text_to_read_aloud } ], max_tokens=6000, ) wav_bytes = base64.b64decode(response.choices[0].message.audio.data) with open(f"{your_file_name}.wav", "wb") as f: f.write(wav_bytes) dist = os.path.abspath(f"{your_file_name}.wav") print("Audio saved to:", dist) def main(): # Running text extraction and TTS process our_text = get_text() read_aloud(our_text) if __name__ == "__main__": main() ``` {% endcode %}
You can listen to the original audio, generated by the GPT-4o Audio Preview model, at [this link](https://drive.google.com/file/d/1PBK9HpDDywo93OV6KDPZWFntQbs_VOTo/view?usp=sharing). *** Copy the code, insert your AIMLAPI key, specify the path to your document in the code, and give it a try yourself! [^1]: A more recent name for the older, now deprecated OpenAI parameter `max_tokens`. Both are currently supported in parallel and perform the same function. --- # Source: https://docs.aimlapi.com/quickstart/readme.md # Documentation Map This page helps you quickly find the right AI model or ready-to-use solution for your task. Open the API reference and copy a working example to integrate it into your code in minutes. *** **Trending Models**
* **Pro-Grade Image Model**: `gemini-3-pro-image-preview`
* **Top Video Generator**: `sora-2-t2v`
* **Smarter Reasoning & Coding**: `gemini-3-pro-preview`
***

**Start with this code block**

* 🚀 Setup guide
* 🧩 SDKs
* ▶️ Run in Playground
```python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aimlapi.com/v1",
    api_key="<YOUR_AIMLAPI_KEY>",
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Write a one-sentence story about numbers."}]
)

print(response.choices[0].message.content)
```
*** ## Browse Models Popular | [View all 400+ models >](https://docs.aimlapi.com/api-references/model-database)
* **ChatGPT** (openai)
* **DeepSeek** (deepseek)
* **Flux** (flux)
Select the model by its **Task**, by its **Developer** or by the supported **Capabilities**: {% hint style="info" %} If you've already made your choice and know the model ID, use the [Search panel](https://docs.aimlapi.com/?q=) on your right. {% endhint %} {% tabs %} {% tab title="Models by TASK" %} {% content-ref url="../api-references/text-models-llm" %} [text-models-llm](https://docs.aimlapi.com/api-references/text-models-llm) {% endcontent-ref %} {% content-ref url="../api-references/image-models" %} [image-models](https://docs.aimlapi.com/api-references/image-models) {% endcontent-ref %} {% content-ref url="../api-references/video-models" %} [video-models](https://docs.aimlapi.com/api-references/video-models) {% endcontent-ref %} {% content-ref url="../api-references/music-models" %} [music-models](https://docs.aimlapi.com/api-references/music-models) {% endcontent-ref %} {% content-ref url="../api-references/speech-models" %} [speech-models](https://docs.aimlapi.com/api-references/speech-models) {% endcontent-ref %} {% content-ref url="../api-references/moderation-safety-models" %} [moderation-safety-models](https://docs.aimlapi.com/api-references/moderation-safety-models) {% endcontent-ref %} {% content-ref url="../api-references/3d-generating-models" %} [3d-generating-models](https://docs.aimlapi.com/api-references/3d-generating-models) {% endcontent-ref %} {% content-ref url="../api-references/vision-models" %} [vision-models](https://docs.aimlapi.com/api-references/vision-models) {% endcontent-ref %} {% content-ref url="../api-references/embedding-models" %} [embedding-models](https://docs.aimlapi.com/api-references/embedding-models) {% endcontent-ref %} {% endtab %} {% tab title="Models by DEVELOPER" %} **Alibaba Cloud**: [Text/Chat](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud) [Image](https://docs.aimlapi.com/api-references/video-models/alibaba-cloud) [Video](https://docs.aimlapi.com/api-references/image-models/alibaba-cloud) [Text-to-Speech](https://docs.aimlapi.com/api-references/speech-models/text-to-speech/alibaba-cloud) [Embedding](https://docs.aimlapi.com/api-references/embedding-models/alibaba-cloud) **Anthracite**: [Text/Chat](https://docs.aimlapi.com/api-references/text-models-llm/anthracite) **Anthropic**: [Text/Chat](https://docs.aimlapi.com/api-references/text-models-llm/anthropic) [Embedding](https://docs.aimlapi.com/api-references/embedding-models/anthropic) **Assembly AI:** [Speech-To-Text](https://docs.aimlapi.com/api-references/speech-models/speech-to-text/assembly-ai) **BAAI**: [Embedding](https://docs.aimlapi.com/api-references/embedding-models/baai) **Baidu**: [Text/Chat](https://docs.aimlapi.com/api-references/text-models-llm/baidu) **ByteDance**: [Text/Chat](https://docs.aimlapi.com/api-references/text-models-llm/bytedance) [Image](https://docs.aimlapi.com/api-references/video-models/bytedance) [Video](https://docs.aimlapi.com/api-references/image-models/bytedance) **Cohere**: [Text/Chat](https://docs.aimlapi.com/api-references/text-models-llm/cohere) **DeepSeek**: [Text/Chat](https://docs.aimlapi.com/api-references/text-models-llm/deepseek) **Deepgram**: [Speech-To-Text](https://docs.aimlapi.com/api-references/speech-models/speech-to-text/deepgram) [Text-to-Speech](https://docs.aimlapi.com/api-references/speech-models/text-to-speech/deepgram) **ElevenLabs****:** [Text-to-Speech](https://docs.aimlapi.com/api-references/speech-models/text-to-speech/elevenlabs) [Voice 
Chat](https://docs.aimlapi.com/api-references/speech-models/voice-chat/elevenlabs) [Music](https://docs.aimlapi.com/api-references/music-models/elevenlabs) **Flux**: [Image](https://docs.aimlapi.com/api-references/image-models/flux) **Google**: [Text/Chat](https://docs.aimlapi.com/api-references/text-models-llm/google) [Image](https://docs.aimlapi.com/api-references/image-models/google) [Video](https://docs.aimlapi.com/api-references/video-models/google) [Music](https://docs.aimlapi.com/api-references/vision-models/ocr-optical-character-recognition/google) [Vision(OCR)](https://docs.aimlapi.com/api-references/music-models/google) [Embedding](https://docs.aimlapi.com/api-references/embedding-models/google) **Gryphe**: [Text/Chat](https://docs.aimlapi.com/api-references/text-models-llm/gryphe) **Hume AI**: [Text-to-Speech](https://docs.aimlapi.com/api-references/speech-models/text-to-speech/hume-ai) **Inworld**: [Text-to-Speech](https://docs.aimlapi.com/api-references/speech-models/text-to-speech/inworld) **Kling AI**: [Image](https://docs.aimlapi.com/api-references/image-models/kling-ai) [Video](https://docs.aimlapi.com/api-references/video-models/kling-ai) **Krea**: [Video](https://docs.aimlapi.com/api-references/video-models/krea) **LTXV**: [Video](https://docs.aimlapi.com/api-references/video-models/ltxv) **Meta**: [Text/Chat](https://docs.aimlapi.com/api-references/text-models-llm/meta) **Microsoft**: [Text-to-Speech](https://docs.aimlapi.com/api-references/speech-models/text-to-speech/microsoft) **MiniMax**: [Text/Chat](https://docs.aimlapi.com/api-references/text-models-llm/minimax) [Video](https://docs.aimlapi.com/api-references/video-models/minimax) [Music](https://docs.aimlapi.com/api-references/music-models/minimax) [Voice-Chat](https://docs.aimlapi.com/api-references/speech-models/voice-chat) **Mistral AI**: [Text/Chat](https://docs.aimlapi.com/api-references/text-models-llm/mistral-ai) [Vision(OCR)](https://docs.aimlapi.com/api-references/vision-models/ocr-optical-character-recognition/mistral-ai) **Moonshot**: [Text/Chat](https://docs.aimlapi.com/api-references/text-models-llm/moonshot) **NousResearch**: [Text/Chat](https://docs.aimlapi.com/api-references/text-models-llm/nousresearch) **NVIDIA**: [Text/Chat](https://docs.aimlapi.com/api-references/text-models-llm/nvidia) **OpenAI**: [Text/Chat](https://docs.aimlapi.com/api-references/text-models-llm/openai) [Image](https://docs.aimlapi.com/api-references/image-models/openai) [Speech-To-Text](https://docs.aimlapi.com/api-references/speech-models/speech-to-text/openai) [Embedding](https://docs.aimlapi.com/api-references/embedding-models/openai) **Perplexity**: [Text/Chat](https://docs.aimlapi.com/api-references/text-models-llm/perplexity) **PixVerse:** [Video](https://docs.aimlapi.com/api-references/video-models/pixverse) **RecraftAI**: [Image](https://docs.aimlapi.com/api-references/image-models/recraftai) **Reve**: [Image](https://docs.aimlapi.com/api-references/image-models/reve) **Runway**: [Video](https://docs.aimlapi.com/api-references/video-models/runway) **Stability AI**: [Image](https://docs.aimlapi.com/api-references/image-models/stability-ai) [Music](https://docs.aimlapi.com/api-references/music-models/stability-ai) [3D-Generation](https://docs.aimlapi.com/api-references/3d-generating-models/stability-ai) **Sber AI**: [Video](https://docs.aimlapi.com/api-references/video-models/sber-ai) **Tencent**: [Image](https://docs.aimlapi.com/api-references/image-models/tencent) 
[Video](https://docs.aimlapi.com/api-references/video-models/tencent) [3D](https://docs.aimlapi.com/api-references/3d-generating-models/tencent) **Together AI**: [Embedding](https://docs.aimlapi.com/api-references/embedding-models/together-ai) **VEED**: [Video](https://docs.aimlapi.com/api-references/video-models/veed) **xAI**: [Text/Chat](https://docs.aimlapi.com/api-references/text-models-llm/xai) [Image](https://docs.aimlapi.com/api-references/image-models/xai) **Zhipu**: [Text/Chat](https://docs.aimlapi.com/api-references/text-models-llm/zhipu) {% endtab %} {% tab title="Text Models by CAPABILITY" %} {% content-ref url="../capabilities/completion-or-chat-models" %} [completion-or-chat-models](https://docs.aimlapi.com/capabilities/completion-or-chat-models) {% endcontent-ref %} {% content-ref url="../capabilities/streaming-mode" %} [streaming-mode](https://docs.aimlapi.com/capabilities/streaming-mode) {% endcontent-ref %} {% content-ref url="../capabilities/code-generation" %} [code-generation](https://docs.aimlapi.com/capabilities/code-generation) {% endcontent-ref %} {% content-ref url="../capabilities/thinking-reasoning" %} [thinking-reasoning](https://docs.aimlapi.com/capabilities/thinking-reasoning) {% endcontent-ref %} {% content-ref url="../capabilities/function-calling" %} [function-calling](https://docs.aimlapi.com/capabilities/function-calling) {% endcontent-ref %} {% content-ref url="../capabilities/image-to-text-vision" %} [image-to-text-vision](https://docs.aimlapi.com/capabilities/image-to-text-vision) {% endcontent-ref %} {% content-ref url="../capabilities/web-search" %} [web-search](https://docs.aimlapi.com/capabilities/web-search) {% endcontent-ref %} {% endtab %} {% endtabs %} ## Browse Solutions * [AI Search Engine](https://docs.aimlapi.com/solutions/bagoodex/ai-search-engine) – use this solution if your project needs to find information on the internet and present it back in a structured format. * [OpenAI Assistants](https://docs.aimlapi.com/solutions/openai/assistants) – use this solution to create tailored AI Assistants capable of handling customer support, data analysis, content generation, and more. *** ## Going Deeper

Use more text model capabilities in your project:

📖 Completion and Chat Completion

📖 Function Calling

📖 Streaming Mode

📖 Vision in Text Models (Image-to-Text)

📖 Code Generation

📖 Thinking / Reasoning

📖 Web Search

Miscellaneous:

🔗 Integrations

📗 Glossary

⚠️ Errors and Messages

FAQ


Learn more about developer-specific features:

📖 Features of Anthropic Models
## Have a Minute? Help Make the Docs Better! We’re currently working on improving our documentation portal, and your feedback would be **incredibly** helpful! Take [**a quick 5-question survey**](https://tally.so/r/w4G9Er) (no personal info required). You can also rate each individual page using the built-in form on the right side of the screen:
Have suggestions for improvement? [**Let us know!**](https://forms.aimlapi.com/doc) --- # Source: https://docs.aimlapi.com/api-references/image-models/recraftai/recraft-v3.md # Recraft v3 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `recraft-v3` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A state-of-the-art image generation model specifically designed for professional designers, featuring advanced text generation capabilities, anatomical accuracy, and precise style control. It stands out for its ability to generate images with extended text content and vector art support. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["recraft-v3"]},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"image_size":{"anyOf":[{"type":"object","properties":{"width":{"type":"integer","minimum":64,"maximum":1536,"default":1024},"height":{"type":"integer","minimum":64,"maximum":1536,"default":768}},"description":"For both height and width, the value must be a multiple of 32."},{"type":"string","enum":["square_hd","square","portrait_4_3","portrait_16_9","landscape_4_3","landscape_16_9"],"description":"The size of the generated image."}],"default":"square_hd"},"style":{"type":"string","enum":["any","realistic_image","digital_illustration","vector_illustration","realistic_image/b_and_w","realistic_image/hard_flash","realistic_image/hdr","realistic_image/natural_light","realistic_image/studio_portrait","realistic_image/enterprise","realistic_image/motion_blur","digital_illustration/pixel_art","digital_illustration/hand_drawn","digital_illustration/grain","digital_illustration/infantile_sketch","digital_illustration/2d_art_poster","digital_illustration/handmade_3d","digital_illustration/hand_drawn_outline","digital_illustration/engraving_color","digital_illustration/2d_art_poster_2","vector_illustration/engraving","vector_illustration/line_art","vector_illustration/line_circuit","vector_illustration/linocut"],"default":"realistic_image","description":"The style of the generated images."},"colors":{"type":"array","items":{"type":"object","properties":{"r":{"type":"integer","minimum":0,"maximum":255},"g":{"type":"integer","minimum":0,"maximum":255},"b":{"type":"integer","minimum":0,"maximum":255}},"required":["r","g","b"]},"default":[],"description":"An array of preferred colors."},"num_images":{"type":"number","enum":[1],"default":1,"description":"The number of images to generate."}},"required":["model","prompt"],"title":"recraft-v3"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded 
from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image using a simple prompt. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "recraft-v3", "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses." } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'recraft-v3', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses. Realistic photo.', }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { images: [ { url: 'https://cdn.aimlapi.com/eagle/files/koala/Z1MUK5lqaL70uC5Mn6Rlj_image.webp', content_type: 'image/webp', file_name: 'image.webp', file_size: 347808 } ] } ``` {% endcode %}
We obtained the following 2048x1024 image by running this code example:
One of **recraft-v3**’s strengths is its wide range of supported styles. By default, it generates realistic images, but we tried a few others—here’s what we got:
Style Experiments

"style": "digital_illustration/infantile_sketch"

"style": "vector_illustration"

{% hint style="success" %} When the `'vector_illustration'` style is selected, the model outputs the image in SVG vector format! For preview purposes, we took a screenshot ☝️ {% endhint %}

"style": "digital_illustration/2d_art_poster"

"style": "digital_illustration/handmade_3d"

--- # Source: https://docs.aimlapi.com/api-references/image-models/recraftai.md # RecraftAI - [Recraft v3](/api-references/image-models/recraftai/recraft-v3.md) --- # Source: https://docs.aimlapi.com/api-references/image-models/reve/reve-create-image.md # reve/create-image {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `reve/create-image` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A model trained from the ground up for strong prompt adherence, refined aesthetics, and typography. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["reve/create-image"]},"aspect_ratio":{"type":"string","enum":["16:9","9:16","3:2","2:3","4:3","3:4","1:1"],"default":"3:2","description":"The aspect ratio of the generated image."},"prompt":{"type":"string","maxLength":2560,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"convert_base64_to_url":{"type":"boolean","default":true,"description":"If True, the URL to the image will be returned; otherwise, the file will be provided in base64 format."}},"required":["model","prompt"],"title":"reve/create-image"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified size using a simple prompt. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json" }, json={ "model": "reve/create-image", "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.", "aspect_ratio": "16:9" } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'reve/create-image', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.', aspect_ratio: '16:9' }), }); const data = await response.json(); console.log('Generation:', data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json { "data": [ { "url": "https://cdn.aimlapi.com/generations/phoenix/1759280291545-6280787e-7e4a-44c9-addf-608314a3cb58.png", "b64_json": null, "request_id": "rsid-f08b8f47354d688d6de93c400fdaf31c", "content_violation": false } ], "meta": { "usage": { "tokens_used": 126000 } } } ``` {% endcode %}
We obtained the following nice 1360x768 image by running this code example:

"A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses."

--- # Source: https://docs.aimlapi.com/api-references/image-models/reve/reve-edit-image.md # reve/edit-image {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `reve/edit-image` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview The model allows you to modify images using plain text commands: adjust colors, text, and perspectives. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["reve/edit-image"]},"image_url":{"type":"string","format":"uri","description":"The URL of the reference image."},"prompt":{"type":"string","maxLength":2560,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"convert_base64_to_url":{"type":"boolean","default":true,"description":"If True, the URL to the image will be returned; otherwise, the file will be provided in base64 format."}},"required":["model","image_url","prompt"],"title":"reve/edit-image"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate a new image using the one from the [flux/dev Quick Example](https://docs.aimlapi.com/api-references/flux/flux-dev#quick-example) as a reference — and make a simple change to it with a prompt. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "reve/edit-image", "prompt": "Make the dinosaur sit on a lounge chair with its back to the camera, looking toward the water. 
The setting sun has almost disappeared below the horizon.", "image_url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png" } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'reve/edit-image', prompt: 'Make the dinosaur sit on a lounge chair with its back to the camera, looking toward the water. The setting sun has almost disappeared below the horizon.', image_url: 'https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png' }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "data": [ { "url": "https://cdn.aimlapi.com/generations/phoenix/1759282497910-293a844d-2f8c-4513-85e8-c80dec720892.png", "b64_json": null, "request_id": "rsid-4af47c2ebb2e31f34dce88cb35873bab", "content_violation": false } ], "meta": { "usage": { "tokens_used": 210000 } } } ``` {% endcode %}
| Reference Image | Generated Image | | ------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | ![](https://cdn.aimlapi.com/eagle/files/monkey/GHx5aT0PR9GXtGi3Cx7CE.png) | | --- # Source: https://docs.aimlapi.com/api-references/image-models/reve/reve-remix-edit-image.md # reve/remix-edit-image {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `reve/remix-edit-image` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview The model takes multiple images as input, with the prompt defining how they are used or combined. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["reve/remix-edit-image"]},"image_urls":{"type":"array","items":{"type":"string","format":"uri"},"minItems":1,"maxItems":4,"description":"List of URLs or local Base64 encoded images to edit."},"aspect_ratio":{"type":"string","enum":["16:9","9:16","3:2","2:3","4:3","3:4","1:1"],"default":"3:2","description":"The aspect ratio of the generated image."},"prompt":{"type":"string","maxLength":2560,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"convert_base64_to_url":{"type":"boolean","default":true,"description":"If True, the URL to the image will be returned; otherwise, the file will be provided in base64 format."}},"required":["model","image_urls","prompt"],"title":"reve/remix-edit-image"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image using two input images and a prompt that defines how they should be edited. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "reve/remix-edit-image", "prompt": "Combine the images so the T-Rex is wearing a business suit, sitting in a cozy small café, drinking from the mug. Blur the background slightly to create a bokeh effect.", "image_urls": [ "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png", "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/blue-mug.jpg" ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'reve/remix-edit-image', prompt: 'Combine the images so the T-Rex is wearing a business suit, sitting in a cozy small café, drinking from the mug. Blur the background slightly to create a bokeh effect.', image_urls: [ "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png", "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/blue-mug.jpg" ] }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "data": [ { "url": "https://cdn.aimlapi.com/generations/phoenix/1759284458418-4d0b832c-3f40-47e3-84bd-f56ca78c3b0e.png", "b64_json": null, "request_id": "rsid-17c1ade740057a36b9711c72bbf4d63f", "content_violation": false } ], "meta": { "usage": { "tokens_used": 210000 } } } ``` {% endcode %}
Reference Images | Generated Image

Image #1

"Combine the images so the T-Rex is wearing a business suit, sitting in a cozy small café, drinking from the mug. Blur the background slightly to create a bokeh effect."

Image #2
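According to the schema, `image_urls` also accepts local images encoded as Base64. The exact accepted encoding is not spelled out on this page, so the helper below assumes a standard data-URI form (`data:image/png;base64,...`); if your request is rejected, fall back to plain URLs as in the Quick Example above.

{% code overflow="wrap" %}
```python
import base64

def to_data_uri(path: str, mime: str = "image/png") -> str:
    """Encode a local image for the image_urls field.
    Assumed data-URI format; the schema only states that Base64 is accepted."""
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("ascii")
    return f"data:{mime};base64,{encoded}"

# Example: mix one local file (hypothetical path) with one remote reference image.
image_urls = [
    to_data_uri("my-local-photo.png"),
    "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/blue-mug.jpg",
]
```
{% endcode %}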

--- # Source: https://docs.aimlapi.com/api-references/image-models/reve.md # Reve - [reve/create-image](/api-references/image-models/reve/reve-create-image.md) - [reve/edit-image](/api-references/image-models/reve/reve-edit-image.md) - [reve/remix-edit-image](/api-references/image-models/reve/reve-remix-edit-image.md) --- # Source: https://docs.aimlapi.com/integrations/roo-code.md # Roo Code ## About Roo Code is an autonomous AI programming agent that works right inside your editor, such as VS Code. It helps you code faster and smarter — whether you're starting a new project, maintaining existing code, or exploring new technologies. You can find the Roo Code repository and community on [GitHub](https://github.com/RooCodeInc/Roo-Code). ## Installing Roo Code in VS Code 1. Open the **Extensions** tab in the VS Code sidebar.
2. In the search bar, type **Roo Code**. 3. Find the extension and click **Install**.
4. After installation, a separate **Roo Code** tab will appear in the sidebar.
## **Configuring Roo Code** 1. Go to the **Roo Code** tab in the sidebar. 2. Click the gear icon in the top-right corner.
In the settings: * Set **API Provider** to **OpenAI Compatible**. * In **Base URL**, enter one of our available endpoints. * In **API Key**, enter your [AI/ML API key](https://aimlapi.com/app/keys). * In **Model ID**, specify the model name. You can find some model selection tips in our [description of code generation as a capability](https://docs.aimlapi.com/capabilities/code-generation). * Click **Save** and **Done**.
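It can also save time to sanity-check your key and the chosen Model ID outside the editor before wiring them into Roo Code: most 400/403 issues (see the Troubleshooting section below) show up immediately in a standalone request. A minimal sketch against the chat completions endpoint; `gpt-4o` is just one Model ID from the supported list below.

{% code overflow="wrap" %}
```python
import requests

def main():
    response = requests.post(
        "https://api.aimlapi.com/v1/chat/completions",
        headers={
            # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
            "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
            "Content-Type": "application/json",
        },
        json={
            # Use the same Model ID you plan to enter in Roo Code:
            "model": "gpt-4o",
            "messages": [{"role": "user", "content": "Reply with OK."}],
        },
    )
    print(response.status_code)
    print(response.json())

if __name__ == "__main__":
    main()
```
{% endcode %}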
All done — start coding with Roo Code! {% hint style="info" %} Roo Code offers a wide range of configurable parameters, and most of them come with a description of their purpose right below. {% endhint %} ## **Supported Models** These models have been tested by our team for compatibility with Roo Code integration.
Supported Model List * [gpt-3.5-turbo](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-3.5-turbo) * [gpt-3.5-turbo-0125](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-3.5-turbo) * [gpt-3.5-turbo-1106](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-3.5-turbo) * [gpt-4o](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) * [gpt-4o-2024-05-13](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) * [gpt-4o-2024-08-06](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) * [gpt-4o-mini](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o-mini) * [gpt-4o-mini-2024-07-18](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o-mini) * [chatgpt-4o-latest](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) * [gpt-4o-2024-05-13](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) * [gpt-4o-2024-08-06](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) * [gpt-4-turbo](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4-turbo) * [gpt-4-turbo-2024-04-09](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4-turbo) * [gpt-4-0125-preview](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4-preview) * [gpt-4-1106-preview](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4-preview) * [o3-mini](https://docs.aimlapi.com/api-references/text-models-llm/openai/o3-mini) * [openai/gpt-4.1-2025-04-14](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4.1) * [openai/gpt-4.1-mini-2025-04-14](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4.1-mini) * [openai/gpt-4.1-nano-2025-04-14](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4.1-nano) * [openai/o4-mini-2025-04-16](https://docs.aimlapi.com/api-references/text-models-llm/openai/o4-mini) * [deepseek/deepseek-chat](https://docs.aimlapi.com/api-references/text-models-llm/deepseek/deepseek-chat) * [deepseek/deepseek-r1](https://docs.aimlapi.com/api-references/text-models-llm/deepseek/deepseek-r1) * [meta-llama/Llama-3.3-70B-Instruct-Turbo](https://docs.aimlapi.com/api-references/text-models-llm/meta/llama-3.3-70b-instruct-turbo) * [meta-llama/Llama-3.2-3B-Instruct-Turbo](https://docs.aimlapi.com/api-references/text-models-llm/meta/llama-3.2-3b-instruct-turbo) * [meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo](https://docs.aimlapi.com/api-references/text-models-llm/meta/meta-llama-3.1-405b-instruct-turbo) * [meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo](https://docs.aimlapi.com/api-references/text-models-llm/meta/meta-llama-3.1-8b-instruct-turbo) * [meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo](https://docs.aimlapi.com/api-references/text-models-llm/meta/meta-llama-3.1-70b-instruct-turbo) * [meta-llama/llama-4-maverick](https://docs.aimlapi.com/api-references/text-models-llm/meta/llama-4-maverick) * [Qwen/Qwen2.5-7B-Instruct-Turbo](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen2.5-7b-instruct-turbo) * [qwen-max](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen-max) * [qwen-max-2025-01-25](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen-max) * [qwen-plus](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen-plus) * [qwen-turbo](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen-turbo) * 
[Qwen/Qwen2.5-72B-Instruct-Turbo](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen2.5-72b-instruct-turbo) * [Qwen/QwQ-32B](https://docs.aimlapi.com/integrations/broken-reference) * [mistralai/Mixtral-8x7B-Instruct-v0.1](https://docs.aimlapi.com/api-references/text-models-llm/mistral-ai/mixtral-8x7b-instruct-v0.1) * [mistralai/Mistral-7B-Instruct-v0.1](https://docs.aimlapi.com/api-references/text-models-llm/mistral-ai/mistral-7b-instruct) * [mistralai/Mistral-7B-Instruct-v0.2](https://docs.aimlapi.com/api-references/text-models-llm/mistral-ai/mistral-7b-instruct) * [mistralai/Mistral-7B-Instruct-v0.3](https://docs.aimlapi.com/api-references/text-models-llm/mistral-ai/mistral-7b-instruct) * [mistralai/mistral-tiny](https://docs.aimlapi.com/api-references/text-models-llm/mistral-ai/mistral-tiny) * [mistralai/mistral-nemo](https://docs.aimlapi.com/api-references/text-models-llm/mistral-ai/mistral-nemo) * [google/gemini-2.0-flash-exp](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.0-flash-exp) * [gemini-2.0-flash-exp](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.0-flash-exp) * [google/gemini-2.0-flash](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.0-flash) * [x-ai/grok-3-beta](https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-3-beta) * [x-ai/grok-3-mini-beta](https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-3-mini-beta) * [anthracite-org/magnum-v4-72b](https://docs.aimlapi.com/api-references/text-models-llm/anthracite/magnum-v4) * [MiniMax-Text-01](https://docs.aimlapi.com/api-references/text-models-llm/minimax/text-01)
## Troubleshooting Possible Issues: * **403 status code (no body)** — This is the most common error. Possible causes: * You might need to use a different endpoint. Be sure to refer to the documentation for the specific model you've selected from our catalog! * You may have run out of tokens or don’t have enough for the request. Check your balance in your account dashboard. * **400 status code (no body)** — This error occurs when using models that are not compatible with the integration. See the previous section [Supported Models](#supported-models) :point\_up: --- # Source: https://docs.aimlapi.com/api-references/video-models/runway.md # Runway - [gen3a\_turbo](/api-references/video-models/runway/gen3a_turbo.md): Description of the gen3a\_turbo model: Pricing, API Reference, Examples. - [gen4\_turbo](/api-references/video-models/runway/gen4_turbo.md) - [gen4\_aleph](/api-references/video-models/runway/gen4_aleph.md) - [act\_two](/api-references/video-models/runway/act_two.md) --- # Source: https://docs.aimlapi.com/api-references/video-models/sber-ai.md # Sber AI - [Kandinsky 5 (Text-to-Video)](/api-references/video-models/sber-ai/kandinsky5-text-to-video.md) - [Kandinsky 5 Distill (Text-to-Video)](/api-references/video-models/sber-ai/kandinsky5-distill-text-to-video.md) --- # Source: https://docs.aimlapi.com/api-references/text-models-llm/bytedance/seed-1.8.md # Seed 1.8 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `bytedance/seed-1-8` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A general-purpose agentic model optimized for efficient and accurate execution of complex tasks in real-world scenarios. {% hint style="success" %} [Create AI/ML API Key](https://aimlapi.com/app/keys) {% endhint %}
How to make the first API call **1️⃣ Required setup (don’t skip this)**\ ▪ **Create an account:** Sign up on the AI/ML API website (if you don’t have one yet).\ ▪ **Generate an API key:** In your account dashboard, create an API key and make sure it’s **enabled** in the UI. **2️⃣ Copy the code example**\ At the bottom of this page, pick the snippet for your preferred programming language (Python / Node.js) and copy it into your project. **3️⃣ Update the snippet for your use case**\ ▪ **Insert your API key:** replace `` with your real AI/ML API key.\ ▪ **Select a model:** set the `model` field to the model you want to call.\ ▪ **Provide input:** fill in the request input field(s) shown in the example (for example, `messages` for chat/LLM models, or other inputs for image/video/audio models). **4️⃣ (Optional) Tune the request**\ Depending on the model type, you can add optional parameters to control the output (e.g., generation settings, quality, length, etc.). See the API schema below for the full list. **5️⃣ Run your code**\ Run the updated code in your development environment. Response time depends on the model and request size, but simple requests typically return quickly. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["bytedance/seed-1-8"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. 
This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."}},"required":["model","messages"],"title":"bytedance/seed-1-8"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"bytedance/seed-1-8", "messages":[ { "role":"user", "content":"Hi! What do you think about mankind?" # insert your prompt } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'bytedance/seed-1-8', messages:[ { role:'user', content: 'Hi! What do you think about mankind?' // insert your prompt here } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "choices": [ { "finish_reason": "stop", "index": 0, "logprobs": null, "message": { "content": "Hi there! That’s such a profound, layered question—mankind is equal parts breathtakingly brilliant and deeply flawed, a species still figuring itself out.\n\nOn one hand, we’re capable of extraordinary goodness and innovation: we’ve cured diseases that once wiped out entire populations, sent probes to the edges of our solar system, created art and music that transcends time and culture, and rallied by the millions to help strangers during disasters (think of the global outpouring after earthquakes or refugee crises). Our capacity for empathy, to care for someone we’ve never met, is one of our most beautiful traits.\n\nOn the other hand, we carry a heavy shadow: we’ve waged devastating wars, exploited each other and the planet for short-term gain, perpetuated systemic inequality, and pushed the natural world to the brink of crisis. These flaws aren’t inherent, but they’re a reminder of how easily we can prioritize self-interest over collective good.\n\nWhat feels most important to me, though, is that we’re a work in progress. We’ve learned from past mistakes—think of the abolition of slavery, the expansion of civil rights, the growing global movement for climate action—and we keep striving to do better. Mankind isn’t just one thing; it’s the teacher staying late to help a student, the scientist working tirelessly on a vaccine, the activist fighting for justice, and also the person making a small, kind choice each day. We have the potential to shape a more compassionate, sustainable future—we just have to keep choosing it, together.", "reasoning_content": "Got it, let's tackle this question: \"Hi! What do you think about mankind?\" First, I need to balance the complexity—mankind has both incredible highs and deep lows. Let's start with the positive aspects: creativity, innovation, compassion, resilience. Mention things like scientific breakthroughs (cures for diseases, space exploration), art, music, acts of kindness (helping strangers, disaster relief). Then, the challenging parts: conflict, environmental harm, inequality, cruelty. But important to frame it as a dynamic, evolving species. Maybe end on a note of hope, since we've shown the capacity to learn and improve. Let's make it heartfelt and nuanced, not too one-sided. Let's structure it:\n\n1. Greeting back, acknowledge it's a big question.\n2. The bright side: creativity, empathy, progress—examples like vaccines, renewable energy, community support.\n3. The shadow side: division, exploitation, ecological damage—examples like wars, climate crisis, inequality.\n4. The middle ground: we're a work in progress. Our capacity for growth and redemption is what stands out; we've corrected past wrongs (like civil rights movements) and keep striving for better.\n5. End with a hopeful note, emphasizing that our choices shape who we are, and there's potential for more good.\n\nLet me put that into natural language:\n\nHi there! 
That’s such a profound, layered question—mankind is equal parts breathtakingly brilliant and deeply flawed, a species still figuring itself out.\n\nOn one hand, we’re capable of extraordinary goodness and innovation: we’ve cured diseases that once wiped out entire populations, sent probes to the edges of our solar system, created art and music that transcends time and culture, and rallied by the millions to help strangers during disasters (think of the global outpouring after earthquakes or refugee crises). Our capacity for empathy, to care for someone we’ve never met, is one of our most beautiful traits.\n\nOn the other hand, we carry a heavy shadow: we’ve waged devastating wars, exploited each other and the planet for short-term gain, perpetuated systemic inequality, and pushed the natural world to the brink of crisis. These flaws aren’t inherent, but they’re a reminder of how easily we can prioritize self-interest over collective good.\n\nWhat feels most important to me, though, is that we’re a work in progress. We’ve learned from past mistakes—think of the abolition of slavery, the expansion of civil rights, the growing global movement for climate action—and we keep striving to do better. Mankind isn’t just one thing; it’s the teacher staying late to help a student, the scientist working tirelessly on a vaccine, the activist fighting for justice, and also the person making a small, kind choice each day. We have the potential to shape a more compassionate, sustainable future—we just have to keep choosing it, together.", "role": "assistant" } } ], "created": 1769655426, "id": "021769655407356f0a583d75381f7c0a662383f3247915e311aa6", "model": "seed-1-8-251228", "service_tier": "default", "object": "chat.completion", "usage": { "completion_tokens": 922, "prompt_tokens": 42, "total_tokens": 964, "prompt_tokens_details": { "cached_tokens": 0 }, "completion_tokens_details": { "reasoning_tokens": 606 } }, "meta": { "usage": { "credits_used": 3897 } } } ``` {% endcode %}
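If you only need the assistant's reply rather than the full payload, the fields shown in the response above can be unpacked in a few lines. The `summarize_completion` helper below is a hypothetical sketch, not part of the API: it assumes a non-streaming call like the one in the Code Example, and it reads the optional `reasoning_content` and `meta` fields defensively, since not every model returns them.

{% code overflow="wrap" %}
```python
# Minimal helper for unpacking a non-streaming chat completion response.
# Field names follow the response shown above; `reasoning_content` and
# `meta` are optional, so they are read defensively.
def summarize_completion(data: dict) -> None:
    choice = data["choices"][0]
    message = choice["message"]

    print("Finish reason:", choice["finish_reason"])
    print("Answer:\n", message["content"])

    # Returned by reasoning-capable models, as in the example response above
    if message.get("reasoning_content"):
        print("\nReasoning trace:\n", message["reasoning_content"])

    usage = data.get("usage", {})
    print(
        "\nTokens - prompt:", usage.get("prompt_tokens"),
        "| completion:", usage.get("completion_tokens"),
        "| total:", usage.get("total_tokens"),
    )

    # Billing info from the `meta` block, if present
    meta_usage = (data.get("meta") or {}).get("usage") or {}
    if "credits_used" in meta_usage:
        print("Credits used:", meta_usage["credits_used"])
```
{% endcode %}

Calling `summarize_completion(data)` right after `data = response.json()` in the Python tab above prints only the answer, the reasoning trace, and the token and credit counts.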
--- # Source: https://docs.aimlapi.com/api-references/video-models/bytedance/seedance-1.0-lite-image-to-video.md # Seedance 1.0 lite (Image-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `bytedance/seedance-1-0-lite-i2v` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} Generate professional video content from a reference image and text prompt in minutes — with the option to keep the camera fixed throughout the entire clip. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a reference image and a prompt. This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["bytedance/seedance-1-0-lite-i2v"]},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame for the video."},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"resolution":{"type":"string","enum":["480p","720p","1080p"],"default":"1080p","description":"An enumeration where the short side of the video frame determines the resolution."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10]},"watermark":{"type":"boolean","default":false,"description":"Whether the video contains a watermark."},"seed":{"type":"integer","description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. If unspecified, a random number is chosen."},"camerafixed":{"type":"boolean","default":false,"description":"Whether to fix the camera position.\n- true: Fix the camera position. The platform will append instructions to fix the camera position in the user's prompt, but the actual effect is not guaranteed.\n- false: Do not fix the camera position."}},"required":["model","image_url","prompt"],"title":"bytedance/seedance-1-0-lite-i2v"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. 
This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. ## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # replace with your actual AI/ML API key api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/generate/video/bytedance/generation" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "bytedance/seedance-1-0-lite-i2v", "prompt": "Mona Lisa puts on glasses with her hands.", "image_url": "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", "duration": "5", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() # print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/generate/video/bytedance/generation" params = { "generation_id": gen_id, } # Insert your AIML API Key instead of : headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) # print("Generation:", response.json()) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. 
Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "bytedance/seedance-1-0-lite-i2v", prompt: "Mona Lisa puts on glasses with her hands.", image_url: "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", duration: "5", }); const url = new URL(`${baseUrl}/generate/video/bytedance/generation`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/generate/video/bytedance/generation`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("Failed to start generation"); return; } const genId = genResponse.id; console.log("Gen_ID:", genId); const startTime = Date.now(); const timeout = 600000; const checkStatus = () => { if (Date.now() - startTime > timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, 10000); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': 'cgt-20250704191750-n4qjp', 'status': 'queued'} Gen_ID: cgt-20250704191750-n4qjp Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': 'cgt-20250704191750-n4qjp', 'status': 'completed', 'video': {'url': 'https://ark-content-generation-ap-southeast-1.tos-ap-southeast-1.volces.com/seedance-1-0-lite-t2v/02175162787056300000000000000000000ffffc0a870115c506e.mp4?X-Tos-Algorithm=TOS4-HMAC-SHA256&X-Tos-Credential=AKLTYjg3ZjNlOGM0YzQyNGE1MmI2MDFiOTM3Y2IwMTY3OTE%2F20250704%2Fap-southeast-1%2Ftos%2Frequest&X-Tos-Date=20250704T111816Z&X-Tos-Expires=86400&X-Tos-Signature=9fa7ce9b1230bdd6c9ed5e2f08bfeda232e48e81877ef1647d45b55b641e9f15&X-Tos-SignedHeaders=host'}} ``` {% endcode %}
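Once the task reaches a final state, the `video.url` field in the response above points to a downloadable MP4. Such links are signed and time-limited (the example URL above carries an `X-Tos-Expires=86400` parameter), so it usually makes sense to save the file right away. The `save_video` helper below is a hypothetical sketch: it assumes `response_data` is the dictionary returned by `main()` in the code above, and the local file name is arbitrary.

{% code overflow="wrap" %}
```python
import requests

def save_video(response_data: dict, file_name: str = "seedance_result.mp4") -> None:
    # `video.url` comes from the GET response schema above; it is only
    # present when the generation finished successfully.
    video = (response_data or {}).get("video")
    if not video or "url" not in video:
        print("No video URL in the response:", response_data)
        return

    with requests.get(video["url"], stream=True) as resp:
        resp.raise_for_status()
        with open(file_name, "wb") as f:
            for chunk in resp.iter_content(chunk_size=8192):
                f.write(chunk)
    print("Saved to", file_name)
```
{% endcode %}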
**Processing time**: \~1.5 min. **Original**: [832x1120](https://drive.google.com/file/d/1d6vJ0AvhlWUBhPAF3YWDOUW_Ze6Ym4Iv/view?usp=sharing) **Low-res GIF preview**:

"Mona Lisa puts on glasses with her hands."

--- # Source: https://docs.aimlapi.com/api-references/video-models/bytedance/seedance-1.0-lite-text-to-video.md # Seedance 1.0 lite (Text-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `bytedance/seedance-1-0-lite-t2v` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} Generate professional video content from text prompts in minutes — with the option to keep the camera fixed throughout the entire clip. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a prompt.\ This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["bytedance/seedance-1-0-lite-t2v"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"resolution":{"type":"string","enum":["480p","720p","1080p"],"default":"1080p","description":"An enumeration where the short side of the video frame determines the resolution."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10]},"watermark":{"type":"boolean","default":false,"description":"Whether the video contains a watermark."},"seed":{"type":"integer","description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. If unspecified, a random number is chosen."},"camerafixed":{"type":"boolean","default":false,"description":"Whether to fix the camera position.\n- true: Fix the camera position. The platform will append instructions to fix the camera position in the user's prompt, but the actual effect is not guaranteed.\n- false: Do not fix the camera position."},"aspect_ratio":{"type":"string","enum":["16:9","4:3","1:1","3:4","9:16","21:9","9:21"],"default":"16:9","description":"The aspect ratio of the generated video."}},"required":["model","prompt"],"title":"bytedance/seedance-1-0-lite-t2v"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. 
This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. ## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% hint style="info" %} Generation may take around 40-50 seconds for a 5-second video. {% endhint %} {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AI/ML API key instead of : api_key = "" # Creating and sending a video generation task to the server def generate_video(): url = "https://api.aimlapi.com/v2/generate/video/bytedance/generation" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "bytedance/seedance-1-0-lite-t2v", "prompt": "A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. 
We see it's coming", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = "https://api.aimlapi.com/v2/generate/video/bytedance/generation" params = { "generation_id": gen_id, } # Insert your AIML API Key instead of : headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Generate video gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; const https = require("https"); const { URL } = require("url"); // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "bytedance/seedance-1-0-lite-t2v", prompt: ` A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming. 
` }); const url = new URL(`${baseUrl}/generate/video/bytedance/generation`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data) } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const result = JSON.parse(body); callback(result); } }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/generate/video/bytedance/generation`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json" } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const result = JSON.parse(body); callback(result); }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 10 * 1000; // 10 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': 'cgt-20250704191750-n4qjp', 'status': 'queued'} Gen_ID: cgt-20250704191750-n4qjp Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': 'cgt-20250704191750-n4qjp', 'status': 'completed', 'video': {'url': 'https://ark-content-generation-ap-southeast-1.tos-ap-southeast-1.volces.com/seedance-1-0-lite-t2v/02175162787056300000000000000000000ffffc0a870115c506e.mp4?X-Tos-Algorithm=TOS4-HMAC-SHA256&X-Tos-Credential=AKLTYjg3ZjNlOGM0YzQyNGE1MmI2MDFiOTM3Y2IwMTY3OTE%2F20250704%2Fap-southeast-1%2Ftos%2Frequest&X-Tos-Date=20250704T111816Z&X-Tos-Expires=86400&X-Tos-Signature=9fa7ce9b1230bdd6c9ed5e2f08bfeda232e48e81877ef1647d45b55b641e9f15&X-Tos-SignedHeaders=host'}} ``` {% endcode %}
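The full example above calls the legacy vendor URL. As noted in the hint earlier on this page, the same task can also be created through the universal `https://api.aimlapi.com/v2/video/generations` endpoint, and the request can carry any of the optional fields from the POST schema above. The sketch below is illustrative only: the parameter values are examples taken from the schema, not recommendations.

{% code overflow="wrap" %}
```python
import requests

# Insert your AIML API key
api_key = ""

payload = {
    "model": "bytedance/seedance-1-0-lite-t2v",
    "prompt": (
        "A menacing evil dragon appears in a distance above the tallest mountain, "
        "then rushes toward the camera with its jaws open, revealing massive fangs. "
        "We see it's coming."
    ),
    # Optional parameters from the POST schema above (example values)
    "resolution": "720p",      # short side of the frame: 480p / 720p / 1080p
    "duration": 5,             # 5 or 10 seconds
    "aspect_ratio": "16:9",
    "seed": 42,                # reuse to get similar results for the same request
    "camerafixed": True,       # ask the platform to keep the camera static
    "watermark": False,
}

response = requests.post(
    "https://api.aimlapi.com/v2/video/generations",
    headers={"Authorization": f"Bearer {api_key}"},
    json=payload,
)
response.raise_for_status()
print(response.json())  # e.g. {'id': '...', 'status': 'queued'}
```
{% endcode %}

The returned `id` is then polled exactly as in the full example above; the GET endpoint accepts it as the `generation_id` query parameter.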
**Processing time**: \~36 sec. **Original**: [1248x704](https://drive.google.com/file/d/1cNFk4MuQG7wuTi_wwQjrNDqGI6jgCKM5/view?usp=sharing) **Low-res GIF preview**:

"A menacing evil dragon appears in a distance above the tallest mountain, then
rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming."

--- # Source: https://docs.aimlapi.com/api-references/video-models/bytedance/seedance-1.0-pro-fast.md # Seedance 1.0 pro fast {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `bytedance/seedance-1-0-pro-fast` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a prompt. \ This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["bytedance/seedance-1-0-pro-fast"]},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame for the video."},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"resolution":{"type":"string","enum":["480p","720p","1080p"],"default":"1080p","description":"An enumeration where the short side of the video frame determines the resolution."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10]},"watermark":{"type":"boolean","default":false,"description":"Whether the video contains a watermark."},"seed":{"type":"integer","description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. If unspecified, a random number is chosen."},"camerafixed":{"type":"boolean","default":false,"description":"Whether to fix the camera position.\n- true: Fix the camera position. The platform will append instructions to fix the camera position in the user's prompt, but the actual effect is not guaranteed.\n- false: Do not fix the camera position."},"last_image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image to be used as the last frame of the video."}},"required":["model","prompt"],"title":"bytedance/seedance-1-0-pro-fast"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. 
This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. ## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Code Example The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "bytedance/seedance-1-0-pro-fast", "prompt": "Mona Lisa puts on glasses with her hands.", "image_url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/mona_lisa_extended.jpg", "duration": "5", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["queued", "generating"]: print(f"Status: {status}. Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. 
Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "bytedance/seedance-1-0-pro-fast", prompt: "Mona Lisa puts on glasses with her hands.", image_url: "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/mona_lisa_extended.jpg", duration: "5", }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 15 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 15 * 1000; // 15 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }) } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': 'FGcTGqPuBac0Masr9DI8-', 'status': 'queued', 'meta': {'usage': {'credits_used': 2000000}}} Generation ID: FGcTGqPuBac0Masr9DI8- Status: queued. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: {'id': 'FGcTGqPuBac0Masr9DI8-', 'status': 'succeeded', 'video': {'url': 'https://cdn.aimlapi.com/rat/seedance-1-0-pro-fast/02176939282693400000000000000000000ffffc0a88025d8e30a.mp4?X-Tos-Algorithm=TOS4-HMAC-SHA256&X-Tos-Credential=AKLTYWJkZTExNjA1ZDUyNDc3YzhjNTM5OGIyNjBhNDcyOTQ%2F20260126%2Fap-southeast-1%2Ftos%2Frequest&X-Tos-Date=20260126T020054Z&X-Tos-Expires=86400&X-Tos-Signature=79dbb3fcd4b9a29728c096ef9da104d49273a8923da0ff6f3806e1ce64eda93c&X-Tos-SignedHeaders=host'}} ``` {% endcode %}
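The request in the example above pins only the first frame via `image_url`. The POST schema for this model also accepts an optional `last_image_url` to fix the final frame of the clip. The payload below is a hypothetical sketch: the image URLs are placeholders, not working links, and the API key placeholder must be filled in.

{% code overflow="wrap" %}
```python
import requests

# Insert your AIML API key
api_key = ""

payload = {
    "model": "bytedance/seedance-1-0-pro-fast",
    "prompt": "Mona Lisa puts on glasses with her hands.",
    # First frame of the clip (same parameter as in the example above)
    "image_url": "https://example.com/first-frame.jpg",      # placeholder
    # Optional: also pin the final frame (see `last_image_url` in the POST schema)
    "last_image_url": "https://example.com/last-frame.jpg",  # placeholder
    "duration": 5,
}

response = requests.post(
    "https://api.aimlapi.com/v2/video/generations",
    headers={"Authorization": f"Bearer {api_key}"},
    json=payload,
)
response.raise_for_status()
print(response.json())  # poll the returned id as in the example above
```
{% endcode %}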
**Processing time**: \~ 34 sec. **Generated video** (1920x1088, without sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/bytedance/seedance-1.0-pro-image-to-video.md # Seedance 1.0 pro (Image-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `bytedance/seedance-1-0-pro-i2v` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} Generate professional video content (1080p) from a reference image and text prompt in a minute — with the option to keep the camera fixed throughout the entire clip. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a reference image and a prompt. This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["bytedance/seedance-1-0-pro-i2v"]},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame for the video."},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"resolution":{"type":"string","enum":["480p","720p","1080p"],"default":"1080p","description":"An enumeration where the short side of the video frame determines the resolution."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10]},"watermark":{"type":"boolean","default":false,"description":"Whether the video contains a watermark."},"seed":{"type":"integer","description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. If unspecified, a random number is chosen."},"camerafixed":{"type":"boolean","default":false,"description":"Whether to fix the camera position.\n- true: Fix the camera position. 
The platform will append instructions to fix the camera position in the user's prompt, but the actual effect is not guaranteed.\n- false: Do not fix the camera position."},"last_image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image to be used as the last frame of the video."}},"required":["model","prompt"],"title":"bytedance/seedance-1-0-pro-i2v"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `generation_id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # replace with your actual AI/ML API key api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/generate/video/bytedance/generation" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "bytedance/seedance-1-0-pro-i2v", "prompt": "Mona Lisa puts on glasses with her hands.", "image_url": "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", "duration": "5" } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/generate/video/bytedance/generation" params = { "generation_id": gen_id, } # Insert your AIML API Key instead of : headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... 
Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "bytedance/seedance-1-0-pro-i2v", prompt: "Mona Lisa puts on glasses with her hands.", image_url: "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", duration: "5", }); const url = new URL(`${baseUrl}/generate/video/bytedance/generation`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/generate/video/bytedance/generation`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("Failed to start generation"); return; } const genId = genResponse.id; console.log("Gen_ID:", genId); const startTime = Date.now(); const timeout = 600000; const checkStatus = () => { if (Date.now() - startTime > timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, 10000); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': 'cgt-20250721143112-kkm5n', 'status': 'queued'} Generation ID: cgt-20250721143112-kkm5n Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': 'cgt-20250721143112-kkm5n', 'status': 'completed', 'video': {'url': 'https://ark-content-generation-ap-southeast-1.tos-ap-southeast-1.volces.com/seedance-1-0-pro/02175307947305300000000000000000000ffffc0a840664c87c7.mp4?X-Tos-Algorithm=TOS4-HMAC-SHA256&X-Tos-Credential=AKLTYjg3ZjNlOGM0YzQyNGE1MmI2MDFiOTM3Y2IwMTY3OTE%2F20250721%2Fap-southeast-1%2Ftos%2Frequest&X-Tos-Date=20250721T063202Z&X-Tos-Expires=86400&X-Tos-Signature=2d18ca9f23a2b12baac84d7cc9b56b4db77b6215f42a336aa170b666b3126324&X-Tos-SignedHeaders=host'}} ``` {% endcode %}
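Once the task status is `completed`, the `video.url` field from the response above can be saved locally. A minimal sketch, assuming the `requests` library and an arbitrary output filename (`response_data` stands for the completed response shown above):

{% code overflow="wrap" %}
```python
import requests

# `response_data` stands for the completed-task response shown above,
# e.g. the dictionary returned by main() in the full example.
video_url = response_data["video"]["url"]

# Stream the file to disk; the local filename is an arbitrary choice.
with requests.get(video_url, stream=True) as video_response:
    video_response.raise_for_status()
    with open("seedance_result.mp4", "wb") as file:
        for chunk in video_response.iter_content(chunk_size=8192):
            file.write(chunk)
```
{% endcode %}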
**Processing time**: \~56 sec (and \~38 sec for the `480p` resolution). **Original**: [1248x1664](https://drive.google.com/file/d/1tWaN8TFUUKjw-zNCboJG22HdnrbM-BAO/view?usp=sharing) **Low-res GIF preview**:

"Mona Lisa puts on glasses with her hands."

--- # Source: https://docs.aimlapi.com/api-references/video-models/bytedance/seedance-1.0-pro-text-to-video.md # Seedance 1.0 pro (Text-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `bytedance/seedance-1-0-pro-t2v` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} Generate professional video content (720p) from text prompts in a minute — with the option to keep the camera fixed throughout the entire clip. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a prompt.\ This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["bytedance/seedance-1-0-pro-t2v"]},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame for the video."},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"resolution":{"type":"string","enum":["480p","720p","1080p"],"default":"1080p","description":"An enumeration where the short side of the video frame determines the resolution."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10]},"watermark":{"type":"boolean","default":false,"description":"Whether the video contains a watermark."},"seed":{"type":"integer","description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. If unspecified, a random number is chosen."},"camerafixed":{"type":"boolean","default":false,"description":"Whether to fix the camera position.\n- true: Fix the camera position. 
The platform will append instructions to fix the camera position in the user's prompt, but the actual effect is not guaranteed.\n- false: Do not fix the camera position."},"last_image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image to be used as the last frame of the video."}},"required":["model","prompt"],"title":"bytedance/seedance-1-0-pro-t2v"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `generation_id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% hint style="info" %} Generation may take around 40-50 seconds for a 5-second video. {% endhint %} {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AI/ML API key instead of : api_key = "" # Creating and sending a video generation task to the server def generate_video(): url = "https://api.aimlapi.com/v2/generate/video/bytedance/generation" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "bytedance/seedance-1-0-pro-t2v", "prompt": "A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. 
We see it's coming", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = "https://api.aimlapi.com/v2/generate/video/bytedance/generation" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Generate video gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; const https = require("https"); const { URL } = require("url"); // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "bytedance/seedance-1-0-pro-t2v", prompt: ` A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming. 
` }); const url = new URL(`${baseUrl}/generate/video/bytedance/generation`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data) } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const result = JSON.parse(body); callback(result); } }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/generate/video/bytedance/generation`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json" } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const result = JSON.parse(body); callback(result); }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Gen_ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 10 * 1000; // 10 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Gen_ID: cgt-20250718224736-699sq Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': 'cgt-20250718224736-699sq', 'status': 'completed', 'video': {'url': 'https://ark-content-generation-ap-southeast-1.tos-ap-southeast-1.volces.com/seedance-1-0-pro/02175285005664100000000000000000000ffffc0a84066848df2.mp4?X-Tos-Algorithm=TOS4-HMAC-SHA256&X-Tos-Credential=AKLTYjg3ZjNlOGM0YzQyNGE1MmI2MDFiOTM3Y2IwMTY3OTE%2F20250718%2Fap-southeast-1%2Ftos%2Frequest&X-Tos-Date=20250718T144824Z&X-Tos-Expires=86400&X-Tos-Signature=603f16de387207c0756812148a6f6de48dc574226356d607666ec53cc98e229c&X-Tos-SignedHeaders=host'}} ``` {% endcode %}
**Processing time**: \~1 min. **Original**: [1920x1088](https://drive.google.com/file/d/1yEBJGIknxeWrzUwWEA30GNLhf1Xizh-h/view?usp=sharing) **Low-res GIF preview**:

"A menacing evil dragon appears in a distance above the tallest mountain,
then rushes toward the camera with its jaws open, revealing massive fangs.
We see it's coming."
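The basic call above only sets the `prompt`. Based on the POST schema earlier on this page, optional parameters such as `resolution`, `duration`, `seed`, `camerafixed`, and `watermark` could be added to the same request body; the specific values below are illustrative only:

{% code overflow="wrap" %}
```python
import requests

# Sketch only: parameter names come from the POST schema above,
# the chosen values are assumptions for illustration.
response = requests.post(
    "https://api.aimlapi.com/v2/video/generations",
    headers={"Authorization": "Bearer <YOUR_AIMLAPI_KEY>"},
    json={
        "model": "bytedance/seedance-1-0-pro-t2v",
        "prompt": "A menacing evil dragon appears above the tallest mountain.",
        "resolution": "720p",  # 480p / 720p / 1080p
        "duration": 5,          # 5 or 10 seconds
        "seed": 42,             # fixed seed for more repeatable results
        "camerafixed": True,    # ask the model to keep the camera static
        "watermark": False,
    },
)
print(response.json())
```
{% endcode %}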

--- # Source: https://docs.aimlapi.com/api-references/image-models/bytedance/seededit-3.0-image-to-image.md # Seededit 3.0 (Image-to-Image) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `bytedance/seededit-3.0-i2i` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview This model can process and generate 4K images, editing selected areas naturally and precisely while faithfully preserving the visual fidelity of non-edited areas. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["bytedance/seededit-3.0-i2i"]},"image":{"type":"string","description":"The image to be edited. Enter the Base64 encoding of the picture or an accessible URL. Image URL: Make sure that the image URL is accessible. Base64-encoded content: The format must be in lowercase."},"size":{"type":"string","enum":["adaptive"],"default":"adaptive","description":"The model checks the size of the input picture against its internal size table and picks the closest match as the output picture size."},"prompt":{"type":"string","description":"The text prompt describing the content, style, or composition of the image to be generated."},"response_format":{"type":"string","enum":["url","b64_json"],"default":"url","description":"The format in which the generated images are returned."},"seed":{"type":"integer","description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"guidance_scale":{"type":"number","minimum":1,"maximum":10,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt when looking for a related image to show you."},"watermark":{"type":"boolean","default":false,"description":"Add an invisible watermark to the generated images."}},"required":["model","image","prompt"],"title":"bytedance/seededit-3.0-i2i"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"tokens_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["tokens_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified size using a simple prompt. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "bytedance/seededit-3.0-i2i", "image": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png", "prompt": "Add a bird to the foreground of the photo.", } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'bytedance/seededit-3.0-i2i', prompt: 'Add a bird to the foreground of the photo.', image: 'https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png', }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "created": 1754408583, "data": [ { "url": "https://ark-content-generation-v2-ap-southeast-1.tos-ap-southeast-1.volces.com/seededit-3-0-i2i/0217544085757151f54867e2807e9e62dfa0a3e2d06531a7ce49c.jpeg?X-Tos-Algorithm=TOS4-HMAC-SHA256&X-Tos-Credential=AKLTYWJkZTExNjA1ZDUyNDc3YzhjNTM5OGIyNjBhNDcyOTQ%2F20250805%2Fap-southeast-1%2Ftos%2Frequest&X-Tos-Date=20250805T154303Z&X-Tos-Expires=86400&X-Tos-Signature=e37babdb426ccd6e36f96a019145af3ea8a6e5cb21f3761d8aa3eae32b24d738&X-Tos-SignedHeaders=host" } ] } ``` {% endcode %}
| Reference Image | Generated Image |
| --- | --- |
| (original) | "Add a bird to the foreground of the photo." |

More generated images:

| | |
| --- | --- |
| "Add a crown to the T-rex's head." | "Add a couple of silver wings" |
| "Remove the dinosaur. Place a book and a bouquet of wildflowers in blue and pink tones on the lounge chair. Let a light foamy surf gently wash the bottom of the chair. Don't change anything else." | "Make the dinosaur sit on a lounge chair with its back to the camera, looking toward the water. The setting sun has almost disappeared below the horizon." |
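According to the schema, the `image` field accepts either an accessible URL (as in the example above) or a Base64-encoded local image. Below is a sketch of sending a local file this way; the local filename is hypothetical, and whether the API expects raw Base64 or a full data URI is an assumption worth verifying:

{% code overflow="wrap" %}
```python
import base64
import requests

# Read a local file and Base64-encode it. The schema only says the `image`
# field takes a Base64 encoding or an accessible URL; the exact expected
# encoding format is an assumption here.
with open("my_photo.jpg", "rb") as f:  # hypothetical local file
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = requests.post(
    "https://api.aimlapi.com/v1/images/generations",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "bytedance/seededit-3.0-i2i",
        "image": image_b64,
        "prompt": "Add a bird to the foreground of the photo.",
    },
)
print(response.json())
```
{% endcode %}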
--- # Source: https://docs.aimlapi.com/api-references/image-models/bytedance/seedream-3.0.md # Seedream 3.0 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `bytedance/seedream-3.0` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview This bilingual (Chinese-English) image generation model supports arbitrary image dimensions — as long as the product of width and height remains within a generous limit (up to 2K). It offers faster response times, improved rendering of small text and layouts, stronger visual aesthetics and structural consistency, and high fidelity in fine details. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["bytedance/seedream-3.0"]},"prompt":{"type":"string","description":"The text prompt describing the content, style, or composition of the image to be generated."},"response_format":{"type":"string","enum":["url","b64_json"],"default":"url","description":"The format in which the generated images are returned."},"size":{"type":"string","description":"Specifies the dimensions (width x height in pixels) of the generated image. Must be between [512x512, 2048x2048]."},"seed":{"type":"integer","description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"guidance_scale":{"type":"number","minimum":1,"maximum":10,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt when looking for a related image to show you."},"watermark":{"type":"boolean","default":false,"description":"Add an invisible watermark to the generated images."}},"required":["model","prompt"],"title":"bytedance/seedream-3.0"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified size using a simple prompt. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "bytedance/seedream-3.0", "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.", "aspect_ratio": "16:9", } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'bytedance/seedream-3.0', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.', aspect_ratio: '16:9', }), }); const data = await response.json(); console.log('Generation:', data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation: {'created': 1751616711, 'data': [{'url': 'https://ark-content-generation-v2-ap-southeast-1.tos-ap-southeast-1.volces.com/seedream-3-0-t2i/02175161671039622600af416b2ca58c9c8a1e1bf93fac0335693.jpeg?X-Tos-Algorithm=TOS4-HMAC-SHA256&X-Tos-Credential=AKLTYjg3ZjNlOGM0YzQyNGE1MmI2MDFiOTM3Y2IwMTY3OTE%2F20250704%2Fap-southeast-1%2Ftos%2Frequest&X-Tos-Date=20250704T081151Z&X-Tos-Expires=86400&X-Tos-Signature=76ad6d2e0eb218521b9ce0bfdc98eaf2aa683e9a0d3840624fb4d413a1fd360e&X-Tos-SignedHeaders=host&x-tos-process=image%2Fwatermark%2Cimage_YXNzZXRzL3dhdGVybWFyay5wbmc_eC10b3MtcHJvY2Vzcz1pbWFnZS9yZXNpemUsUF81'}]} ``` {% endcode %}
We obtained the following 1888x301 image by running this code example:

'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.'

Here’s an example of a 555×543 image generated using the same prompt:

'A T-Rex relaxing on a beach, lying on a sun lounger
and wearing sunglasses.'
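The schema for this model also documents a `size` parameter (width x height in pixels, between 512x512 and 2048x2048), so the output dimensions could be set explicitly instead; the value below is just an example:

{% code overflow="wrap" %}
```python
import requests

# Sketch only: `size` and `seed` come from the schema above;
# the chosen values are assumptions for illustration.
response = requests.post(
    "https://api.aimlapi.com/v1/images/generations",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "bytedance/seedream-3.0",
        "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.",
        "size": "1280x720",  # width x height, per the schema description
        "seed": 42,          # same seed + same prompt should reproduce the same image
    },
)
print(response.json()["data"][0]["url"])
```
{% endcode %}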

--- # Source: https://docs.aimlapi.com/api-references/image-models/bytedance/seedream-4-5.md # Seedream 4.5 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `bytedance/seedream-4-5` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview The model combines both text-to-image and image-to-image capabilities. Compared to [Seedream 4.0](https://docs.aimlapi.com/api-references/image-models/bytedance/seedream-v4-edit-image-to-image), this model significantly improves editing consistency (preserving subject details, lighting, and color tone), the quality of portraits and small text. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["bytedance/seedream-4-5"]},"prompt":{"type":"string","description":"The text prompt describing the content, style, or composition of the image to be generated."},"image_urls":{"type":"array","items":{"type":"string","format":"uri"},"minItems":1,"maxItems":14,"description":"List of URLs or local Base64 encoded images to edit."},"image_size":{"anyOf":[{"type":"object","properties":{"width":{"type":"integer","minimum":1440,"maximum":4096,"default":2048},"height":{"type":"integer","minimum":1440,"maximum":4096,"default":2048}}},{"type":"string","enum":["2K","4K"]}],"description":"The size of the generated image."},"response_format":{"type":"string","enum":["url","b64_json"],"default":"url","description":"The format in which the generated images are returned."},"seed":{"type":"integer","description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"watermark":{"type":"boolean","default":false,"description":"Add an invisible watermark to the generated images."}},"required":["model","prompt"],"title":"bytedance/seedream-4-5"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate a new image using the one from [the flux/dev Quick Example](https://docs.aimlapi.com/api-references/flux/flux-dev#quick-example) as a reference — and make a simple change to it with a prompt. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "bytedance/seedream-4-5", "prompt": "Combine the images so the T-Rex is wearing a business suit, sitting in a cozy small café, drinking from the mug. Blur the background slightly to create a bokeh effect.", "image_urls": [ "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png", "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/blue-mug.jpg" ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'bytedance/seedream-4-5', prompt: 'Combine the images so the T-Rex is wearing a business suit, sitting in a cozy small café, drinking from the mug. Blur the background slightly to create a bokeh effect.', image_urls: [ "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png", "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/blue-mug.jpg" ] }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "data": [ { "url": "https://cdn.aimlapi.com/bison/seedream-4-5/02176522545211712014becacfaf8a37949eaec3c8139753caaae_0.jpeg?X-Tos-Algorithm=TOS4-HMAC-SHA256&X-Tos-Credential=AKLTYWJkZTExNjA1ZDUyNDc3YzhjNTM5OGIyNjBhNDcyOTQ%2F20251208%2Fap-southeast-1%2Ftos%2Frequest&X-Tos-Date=20251208T202437Z&X-Tos-Expires=86400&X-Tos-Signature=9be05c928d39a74abc5c06bff5759a28449256d72f1d062933479f279bc672cb&X-Tos-SignedHeaders=host", "size": "2048x2048" } ], "meta": { "usage": { "credits_used": 84000 } } } ``` {% endcode %}
| Reference Images | Generated Image |
| --- | --- |
| Image #1 | "Combine the images so the T-Rex is wearing a business suit, sitting in a cozy small café, drinking from the mug. Blur the background slightly to create a bokeh effect." |
| Image #2 | |
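Per the schema, `image_size` can also be set to the `"2K"` or `"4K"` presets, or to an explicit width/height object (1440 to 4096 pixels per side). A hedged sketch that requests a 4K image and saves it locally (the filename is arbitrary):

{% code overflow="wrap" %}
```python
import requests

# Sketch under assumptions: the prompt and output filename are illustrative;
# `image_size` values come from the schema above.
response = requests.post(
    "https://api.aimlapi.com/v1/images/generations",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "bytedance/seedream-4-5",
        "prompt": "A T-Rex wearing a business suit in a cozy café.",
        "image_size": "4K",
    },
)
image_url = response.json()["data"][0]["url"]

# Download the generated image to disk.
with open("seedream_4_5_result.jpeg", "wb") as file:
    file.write(requests.get(image_url).content)
```
{% endcode %}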

--- # Source: https://docs.aimlapi.com/api-references/image-models/bytedance/seedream-v4-edit-image-to-image.md # Seedream 4.0 Edit (Image-to-image) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `bytedance/seedream-v4-edit` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview The model supports background replacement, object editing, style and color adjustments, lighting and texture enhancements, and artistic filters, while ensuring character consistency and allowing iterative refinement. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["bytedance/seedream-v4-edit"]},"image_urls":{"type":"array","items":{"type":"string","format":"uri"},"minItems":1,"maxItems":10,"description":"List of URLs or local Base64 encoded images to edit."},"image_size":{"anyOf":[{"type":"string","enum":["square_hd","square","portrait_4_3","portrait_16_9","landscape_4_3","landscape_16_9"]},{"type":"object","properties":{"width":{"type":"number","minimum":1024,"maximum":4096},"height":{"type":"number","minimum":1024,"maximum":4096}},"required":["width","height"]}],"default":"square_hd","description":"The size of the generated image."},"seed":{"type":"integer","minimum":1,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"sync_mode":{"type":"boolean","default":false,"description":"If set to true, the function will wait for the image to be generated and uploaded before returning the response. 
This will increase the latency of the function but it allows you to get the image directly in the response without going through the CDN."},"enable_safety_checker":{"type":"boolean","default":true,"description":"If set to True, the safety checker will be enabled."},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"num_images":{"type":"number","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."}},"required":["model","image_urls","prompt"],"title":"bytedance/seedream-v4-edit"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image using two input images and a prompt that defines how they should be edited. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json", }, json={ "model":"bytedance/seedream-v4-edit", "prompt": "Add a bird to the foreground of the photo.", "image_urls": [ "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png" ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'bytedance/seedream-v4-edit', prompt: 'Add a bird to the foreground of the photo.', image_urls: [ 'https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png' ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "images": [ { "url": "https://v3b.fal.media/files/b/kangaroo/nHWHZmeMr-SL4R7uFtvq7_d54b8823cf784f67bcfa43993bdb2179.png", "content_type": "image/png", "file_name": "d54b8823cf784f67bcfa43993bdb2179.png", "file_size": 1033237, "width": null, "height": null } ], "seed": 623004765, "data": [ { "url": "https://v3b.fal.media/files/b/kangaroo/nHWHZmeMr-SL4R7uFtvq7_d54b8823cf784f67bcfa43993bdb2179.png", "content_type": "image/png", "file_name": "d54b8823cf784f67bcfa43993bdb2179.png", "file_size": 1033237, "width": null, "height": null } ], "meta": { "usage": { "tokens_used": 63000 } } } ``` {% endcode %}
| Reference Image | Generated Image |
| --- | --- |
| (original) | "Add a bird to the foreground of the photo." |

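The schema also exposes a `num_images` parameter (from 1 to 4) for generating several variants in one call; a sketch with assumed values:

{% code overflow="wrap" %}
```python
import requests

# Sketch only: `num_images` is taken from the schema above,
# the chosen value is an assumption.
response = requests.post(
    "https://api.aimlapi.com/v1/images/generations",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "bytedance/seedream-v4-edit",
        "prompt": "Add a bird to the foreground of the photo.",
        "image_urls": [
            "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png"
        ],
        "num_images": 3,
    },
)

# Each variant appears as a separate entry in `data`.
for i, item in enumerate(response.json()["data"]):
    print(f"Variant {i + 1}: {item['url']}")
```
{% endcode %}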
--- # Source: https://docs.aimlapi.com/api-references/image-models/bytedance/seedream-v4-text-to-image.md # Seedream 4.0 (Text-to-Image) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `bytedance/seedream-v4-text-to-image` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview Ultra-fast, consistent in character rendering, and matching [Gemini 2.5 Flash Image (Nano Banana)](https://docs.aimlapi.com/api-references/image-models/google/gemini-2.5-flash-image) in quality. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["bytedance/seedream-v4-text-to-image"]},"image_size":{"anyOf":[{"type":"string","enum":["square_hd","square","portrait_4_3","portrait_16_9","landscape_4_3","landscape_16_9"]},{"type":"object","properties":{"width":{"type":"number","minimum":1024,"maximum":4096},"height":{"type":"number","minimum":1024,"maximum":4096}},"required":["width","height"]}],"default":"square_hd","description":"The size of the generated image."},"seed":{"type":"integer","minimum":1,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"sync_mode":{"type":"boolean","default":false,"description":"If set to true, the function will wait for the image to be generated and uploaded before returning the response. This will increase the latency of the function but it allows you to get the image directly in the response without going through the CDN."},"enable_safety_checker":{"type":"boolean","default":true,"description":"If set to True, the safety checker will be enabled."},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"num_images":{"type":"number","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."}},"required":["model","prompt"],"title":"bytedance/seedream-v4-text-to-image"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified size using a simple prompt. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "bytedance/seedream-v4-text-to-image", "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.", "image_size": { "width": 4096, "height": 4096 }, } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'bytedance/seedream-3.0', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.', image_size: { width: 4096, height: 4096 }, }), }); const data = await response.json(); console.log('Generation:', data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "images": [ { "url": "https://v3b.fal.media/files/b/lion/O0byJzpkMBsjWFUMSRelX_ae55fef23aa54a1cad92c3abdf8f5337.png", "content_type": "image/png", "file_name": "ae55fef23aa54a1cad92c3abdf8f5337.png", "file_size": 3282232, "width": null, "height": null } ], "seed": 1367947822, "data": [ { "url": "https://v3b.fal.media/files/b/lion/O0byJzpkMBsjWFUMSRelX_ae55fef23aa54a1cad92c3abdf8f5337.png", "content_type": "image/png", "file_name": "ae55fef23aa54a1cad92c3abdf8f5337.png", "file_size": 3282232, "width": null, "height": null } ], "meta": { "usage": { "tokens_used": 63000 } } } ``` {% endcode %}
We obtained the following 4096x4096 image by running this code example:

"A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses."

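Besides an explicit width/height, the schema's `image_size` accepts presets such as `landscape_16_9`, and a fixed `seed` makes results reproducible for the same prompt. A sketch with assumed values:

{% code overflow="wrap" %}
```python
import requests

# Sketch only: the preset name comes from the schema's `image_size` enum;
# the seed value is an arbitrary example.
response = requests.post(
    "https://api.aimlapi.com/v1/images/generations",
    headers={
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "bytedance/seedream-v4-text-to-image",
        "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.",
        "image_size": "landscape_16_9",
        "seed": 12345,
    },
)
print(response.json()["data"][0]["url"])
```
{% endcode %}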
--- # Source: https://docs.aimlapi.com/api-references/service-endpoints.md # Service Endpoints - [Account Balance](/api-references/service-endpoints/account-balance.md) - [Complete Model List](/api-references/service-endpoints/complete-model-list.md) --- # Source: https://docs.aimlapi.com/quickstart/setting-up.md # Quickstart Here, you'll learn how to start using our API in your code. The following steps must be completed regardless of whether you integrate one of the [models](https://docs.aimlapi.com/api-references/model-database) we offer or use our ready-made solution: * [generating an AIML API Key](#generating-an-aiml-api-key), * [configuring the base URL](#configuring-base-url), * [making an API call](#making-an-api-call). Let's walk through an example of connecting to the [**gpt-4o**](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) model via OpenAI SDK. This guide is suitable even for complete beginners. ## G**enerating an AIML API Key**
What is an API Key? You can find your AIML API key on the [account page](https://aimlapi.com/app/keys). An AIML API Key is a credential that grants you access to our API from within your code. It is a sensitive string of characters that should be kept confidential. Do not share this API key with anyone else, as it could be misused without your knowledge. ⚠️ Note that API keys from third-party organizations cannot be used with our API: you need an AIML API Key.
To use the AIML API, you need to create an account and generate an API key. Follow these steps: 1. [**Create an Account**](https://aimlapi.com/app/sign-up)**:** Visit the AI/ML API website and create an account. 2. [**Generate an API Key**](https://aimlapi.com/app/keys)**:** After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

Your API key

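A common practice (not required by the API) is to keep the key out of your source code and read it from an environment variable at runtime; the variable name below is just a convention you could choose:

{% code overflow="wrap" %}
```python
import os

# Hypothetical convention: export AIMLAPI_API_KEY="<your key>" in your shell,
# then read it at runtime instead of hard-coding the key in source files.
api_key = os.environ["AIMLAPI_API_KEY"]
```
{% endcode %}

This also makes it harder to accidentally commit the key to version control.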
## **Configuring Base URL**
What is a Base URL? The **Base URL** is the first part of the URL (including the protocol, domain, and pathname) that determines the server responsible for handling your request. It’s crucial to configure the correct Base URL in your application, especially if you are using SDKs from OpenAI, Azure, or other providers. By default, these SDKs are set to point to their servers, which are not compatible with our API keys and do not support many of the models we offer.
Depending on your environment and application, you will set the base URL differently. Below is a universal string that you can use to access our API. Copy it or return here later when you are ready with your environment or app. ``` https://api.aimlapi.com ``` The AI/ML API supports both versioned and non-versioned URLs, providing flexibility in your API requests. You can use either of the following formats: * * {% hint style="success" %} Using versioned URLs can help ensure compatibility with future updates and changes to the API. It is recommended to use versioned URLs for long-term projects to maintain stability. {% endhint %} ## Making an API Call Based on your environment, you will call our API differently. Below are two common ways to call our API using two popular programming languages: **Python** and **NodeJS**. {% hint style="info" %} In the examples below, we use the [**OpenAI SDK**](https://docs.aimlapi.com/supported-sdks#openai). This is possible due to our compatibility with most OpenAI APIs, but this is just one approach. You can use our API without this SDK with raw HTTP queries. {% endhint %} If you don’t want lengthy explanations, here’s the code you can use right away in a Python or Node.js program. You only need to replace `` with your AIML API Key obtained from your account.\ However, below, we will still go through these examples step by step in both languages explaining every single line. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python from openai import OpenAI base_url = "https://api.aimlapi.com/v1" # Insert your AIML API key in the quotation marks instead of : api_key = "" system_prompt = "You are a travel agent. Be descriptive and helpful." user_prompt = "Tell me about San Francisco" api = OpenAI(api_key=api_key, base_url=base_url) def main(): completion = api.chat.completions.create( model="gpt-4o", messages=[ {"role": "system", "content": system_prompt}, {"role": "user", "content": user_prompt}, ], temperature=0.7, max_tokens=256, ) response = completion.choices[0].message.content print("User:", user_prompt) print("AI:", response) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="NodeJS" %} {% code overflow="wrap" %} ```javascript #!/usr/bin/env node const OpenAI = require("openai"); const baseURL = "https://api.aimlapi.com/v1"; const apiKey = "PASTE YOUR API KEY HERE"; const systemPrompt = "You are a travel agent. Be descriptive and helpful."; const userPrompt = "Tell me about San Francisco"; const api = new OpenAI({ apiKey, baseURL, }); const main = async () => { try { const completion = await api.chat.completions.create({ model: "google/gemma-3-27b-it", messages: [ { role: "system", content: systemPrompt, }, { role: "user", content: userPrompt, }, ], temperature: 0.7, }); const response = completion.choices[0].message.content; console.log("User:", userPrompt); console.log("AI:", response); } catch (error) { console.error("Error:", error.message); } }; main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Step-by-step example in Python Let's start from the very beginning. We assume you have already installed Python (with venv); if not, here is a [guide for beginners](https://docs.aimlapi.com/faq/can-i-use-api-in-python). Create a new folder for the test project, name it `aimlapi-welcome`, and change into it. ```bash mkdir ./aimlapi-welcome cd ./aimlapi-welcome ``` (Optional) If you use an IDE, we recommend opening the created folder as a workspace. For example, in VS Code you can do it with: ``` code . ``` Run a terminal inside the created folder and create a virtual environment with the command ```shell python3 -m venv ./.venv ``` Activate the created virtual environment ```shell # Linux / Mac source ./.venv/bin/activate # Windows .\.venv\Scripts\activate ``` Install the required dependencies. In our case, we only need the OpenAI SDK ```shell pip install openai ``` Create a new file and name it `travel.py` ```shell touch travel.py ``` Paste the following content into `travel.py` and replace `` with the API key you got in the [first step](#generating-an-api-key). ```python from openai import OpenAI base_url = "https://api.aimlapi.com/v1" api_key = "" system_prompt = "You are a travel agent. Be descriptive and helpful." user_prompt = "Tell me about San Francisco" api = OpenAI(api_key=api_key, base_url=base_url) def main(): completion = api.chat.completions.create( model="gpt-4o", messages=[ {"role": "system", "content": system_prompt}, {"role": "user", "content": user_prompt}, ], temperature=0.7, max_tokens=256, ) response = completion.choices[0].message.content print("User:", user_prompt) print("AI:", response) if __name__ == "__main__": main() ``` Run the application ```shell python3 ./travel.py ``` If you did everything correctly, you will see output like the following: {% code overflow="wrap" %} ```json5 User: Tell me about San Francisco AI: San Francisco, located in northern California, USA, is a vibrant and culturally rich city known for its iconic landmarks, beautiful vistas, and diverse neighborhoods. It's a popular tourist destination famous for its iconic Golden Gate Bridge, which spans the entrance to the San Francisco Bay, and the iconic Alcatraz Island, home to the infamous federal prison. The city's famous hills offer stunning views of the bay and the cityscape. Lombard Street, the "crookedest street in the world," is a must-see attraction, with its zigzagging pavement and colorful gardens. Ferry Building Marketplace is a great place to explore local food and artisanal products, and the Pier 39 area is home to sea lions, shops, and restaurants. San Francisco's diverse neighborhoods each have their unique character. The historic Chinatown is the oldest in North America, while the colorful streets of the Mission District are known for their murals and Latin American culture. The Castro District is famous for its LGBTQ+ community and vibrant nightlife. ``` {% endcode %}
Step-by-step example in NodeJS We assume you already have Node.js installed. If not, here is a [guide for beginners](https://docs.aimlapi.com/faq/can-i-use-api-in-nodejs). Create a new folder for the example project ```bash mkdir ./aimlapi-welcome cd ./aimlapi-welcome ``` Create a project file ```bash npm init -y ``` Install the required dependencies ```bash npm i openai ``` Create a file with the source code ```bash touch ./index.js ``` And paste the following content to the file and save it ```javascript #!/usr/bin/env node const OpenAI = require("openai"); const baseURL = "https://api.aimlapi.com/v1"; const apiKey = "PASTE YOUR API KEY HERE"; const systemPrompt = "You are a travel agent. Be descriptive and helpful."; const userPrompt = "Tell me about San Francisco"; const api = new OpenAI({ apiKey, baseURL, }); const main = async () => { try { const completion = await api.chat.completions.create({ model: "google/gemma-3-27b-it", messages: [ { role: "system", content: systemPrompt, }, { role: "user", content: userPrompt, }, ], temperature: 0.7, }); const response = completion.choices[0].message.content; console.log("User:", userPrompt); console.log("AI:", response); } catch (error) { console.error("Error:", error.message); } }; main(); ``` Run the file ```bash ./index.js ``` You will see a response that looks like this {% code overflow="wrap" %} ```json5 User: Tell me about San Francisco AI: San Francisco, located in the northern part of California, USA, is a vibrant and culturally rich city known for its iconic landmarks, beautiful scenery, and diverse neighborhoods. The city is famous for its iconic Golden Gate Bridge, an engineering marvel and one of the most recognized structures in the world. Spanning the Golden Gate Strait, this red-orange suspension bridge connects San Francisco to Marin County and offers breathtaking views of the San Francisco Bay and the Pacific Ocean. ``` {% endcode %}
## Code Explanation

Both examples are written in different programming languages, but despite that, they look very similar. Let's break down the code step by step and see what's going on.

In the examples above, we are using the OpenAI SDK. The OpenAI SDK is a convenient module that lets us use the AI/ML API without writing repetitive boilerplate code for handling HTTP requests.

Before we can use the OpenAI SDK, it needs to be imported. The import happens in the following places:

{% tabs %}
{% tab title="JavaScript" %}
```javascript
const { OpenAI } = require("openai");
```
{% endtab %}

{% tab title="Python" %}
```python
from openai import OpenAI
```
{% endtab %}
{% endtabs %}

As simple as that. The next step is to initialize the variables that our code will use. The two main ones are the base URL and the API key. We already discussed them at the beginning of the article.

{% tabs %}
{% tab title="JavaScript" %}
```javascript
const baseURL = "https://api.aimlapi.com/v1";
const apiKey = "";

const systemPrompt = "You are a travel agent. Be descriptive and helpful";
const userPrompt = "Tell me about San Francisco";
```
{% endtab %}

{% tab title="Python" %}
```python
base_url = "https://api.aimlapi.com/v1"
api_key = ""

system_prompt = "You are a travel agent. Be descriptive and helpful."
user_prompt = "Tell me about San Francisco"
```
{% endtab %}
{% endtabs %}

Users communicate with LLMs through text. These texts are usually called "prompts." Our code uses prompts with two roles: system and user. The system prompt serves as the main source of instructions for the LLM generation, while the user prompt carries the user input, i.e. the subject the system prompt is applied to. Although individual models may behave differently, this convention applies to most chat LLMs, currently among the most useful and popular model types. In the code, the prompts are stored in the variables `systemPrompt` and `userPrompt` in JS, and `system_prompt` and `user_prompt` in Python.

Before we use the API, we need to create an instance of the OpenAI SDK class, which gives us access to all of its methods. The instance is created from the imported package, and we pass it two main parameters: the base URL and the API key.

{% tabs %}
{% tab title="JavaScript" %}
```javascript
const api = new OpenAI({
  apiKey,
  baseURL,
});
```
{% endtab %}

{% tab title="Python" %}
```python
api = OpenAI(api_key=api_key, base_url=base_url)
```
{% endtab %}
{% endtabs %}

Because of naming conventions, these two parameters are spelled slightly differently in the two languages (camel case in JS and snake case in Python), but their functionality is the same.

All preparation steps are done. Now we need to write our functionality and create something great. In the examples above, we build the simplest possible travel agent. Let's break down how we send a request to the model.

A good practice is to split the code into complete blocks with their own logic and not place executable code at the global module level. This rule applies in both languages we discuss, so we create a main function containing all our logic. In JS, this function needs to be async because the SDK methods return Promises; in Python, the requests run synchronously.

The OpenAI SDK provides methods for communicating with chat models; the one we need is the `chat.completions.create` function. This function accepts multiple parameters but requires only two: `model` and `messages`.

`model` is a string, the name of the model that you want to use.
For best results, use a model designed for chat; otherwise you may get unpredictable output if the model is not fine-tuned for that purpose. A list of supported models can be found here.

`messages` is an array of objects, each with a `content` field holding the prompt and a `role` string that can be one of `system`, `user`, `tool`, or `assistant`. The role tells the model what to do with the prompt: Is this an instruction? Is this a user message? Is this an example of how to answer? Is this the result of code execution? The `tool` role is used for more complex behavior and will be discussed in another article. In our example, we also use `max_tokens` and `temperature`.

With that knowledge, we can now send our request like the following:

{% tabs %}
{% tab title="JavaScript" %}
```javascript
const completion = await api.chat.completions.create({
  model: "gpt-4o",
  messages: [
    {
      role: "system",
      content: systemPrompt,
    },
    {
      role: "user",
      content: userPrompt,
    },
  ],
  temperature: 0.7,
  max_tokens: 256,
});
```
{% endtab %}

{% tab title="Python" %}
```python
completion = api.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
    temperature=0.7,
    max_tokens=256,
)
```
{% endtab %}
{% endtabs %}

The response from the `chat.completions.create` function contains a [completion](https://docs.aimlapi.com/capabilities/completion-or-chat-models). Completion is a fundamental part of how LLMs work: every LLM is, in essence, a word-autocomplete engine trained on huge amounts of data. Chat models are designed to autocomplete sequences of messages with prompts and specific roles, while other models can have their own custom logic and may not use roles at all.

Inside this completion, we are interested in the generated text. We can extract it from the completion variable:

{% tabs %}
{% tab title="JavaScript" %}
```javascript
const response = completion.choices[0].message.content;
```
{% endtab %}

{% tab title="Python" %}
```python
response = completion.choices[0].message.content
```
{% endtab %}
{% endtabs %}

In certain cases, a completion can contain multiple results. These results are called choices. Every choice has a message, the product of generation, and the generated string is placed in its `content` field, which we assigned to our response variable above.

In the final step, we can see the results. In both examples, we print the user prompt and the response as if it were a conversation:

{% tabs %}
{% tab title="JavaScript" %}
```javascript
console.log("User:", userPrompt);
console.log("AI:", response);
```
{% endtab %}

{% tab title="Python" %}
```python
print("User:", user_prompt)
print("AI:", response)
```
{% endtab %}
{% endtabs %}

Voila! Using AI/ML API models is the simplest and most productive way to get into the world of Machine Learning and Artificial Intelligence.
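As mentioned above, a completion can contain several choices. Here is a minimal sketch in Python of what requesting and reading multiple candidates might look like, reusing the `api`, `system_prompt`, and `user_prompt` variables from the tutorial and assuming the selected model supports the optional `n` parameter (check the model's API schema to confirm):

```python
# Sketch only: `n` asks for several candidate answers in one call,
# assuming the chosen model supports this optional parameter.
completion = api.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ],
    temperature=0.7,
    max_tokens=256,
    n=2,
)

# Each element of `choices` is one independent answer.
for i, choice in enumerate(completion.choices):
    print(f"--- Choice {i} ---")
    print(choice.message.content)
```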
## Future Steps * [Know more about OpenAI SDK inside AI/ML API](https://docs.aimlapi.com/quickstart/supported-sdks) --- # Source: https://docs.aimlapi.com/api-references/image-models/topaz-labs/sharpen-generative.md # Sharpen Generative {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `topaz-labs/sharpen-gen` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A next-level sharpening model powered by generative AI, capable of recovering missing details during the refocusing/resharpening process. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["topaz-labs/sharpen-gen"]},"mode":{"type":"string","enum":["Super Focus","Super Focus V2"]},"image_url":{"type":"string","format":"uri","description":"The URL of the reference image."},"output_format":{"type":"string","enum":["jpeg","jpg","png","tiff","tif"],"default":"jpeg","description":"The format of the generated image."},"subject_detection":{"type":"string","enum":["All","Foreground","Background"],"default":"All","description":"Specifies which subjects to detect and process. Options: 'All' (detect all subjects), 'Foreground' (detect only foreground subjects), 'Background' (detect background subjects)."},"face_enhancement":{"type":"boolean","default":true,"description":"Whether to enhance faces in the image. When true, the model applies face-specific improvements."},"face_enhancement_creativity":{"type":"number","minimum":0,"maximum":1,"default":0,"description":"Level of creativity for face enhancement (0-1). Higher values allow more creative, less conservative changes."},"face_enhancement_strength":{"type":"number","minimum":0,"maximum":1,"default":0.8,"description":"How sharp enhanced faces are relative to background (0-1). Lower values blend changes subtly; higher values make faces more pronounced."},"strength":{"type":"number","minimum":0,"maximum":1,"description":"Defines the overall intensity of the sharpening effect. Increases details. Too much sharpening can create an unrealistic result."},"focus_boost":{"type":"number","minimum":0.25,"maximum":1,"description":"Corrects images that are missing detail by downscaling your image then upscaling the results back to the original size. 
Use on very blurry images!"},"seed":{"type":"integer","description":"Optional fixed seed for repeatable results."}},"required":["model","mode","image_url"],"title":"topaz-labs/sharpen-gen"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}}
```

## Quick Example

Let's sharpen a relatively strongly blurred image using the *Super Focus V2* mode while adjusting the *strength* parameter.

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import json


def main():
    response = requests.post(
        "https://api.aimlapi.com/v1/images/generations",
        headers={
            # Insert your AIML API Key instead of :
            "Authorization": "Bearer ",
            "Content-Type": "application/json",
        },
        json={
            "model": "topaz-labs/sharpen-gen",
            "image_url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/blurred-landscape.png",
            "mode": "Super Focus V2",
            "strength": 0.6,
            "output_format": "jpeg",
        }
    )

    data = response.json()
    print(json.dumps(data, indent=2, ensure_ascii=False))


if __name__ == "__main__":
    main()
```
{% endcode %}
{% endtab %}

{% tab title="JS" %}
{% code overflow="wrap" %}
```javascript
async function main() {
  const response = await fetch('https://api.aimlapi.com/v1/images/generations', {
    method: 'POST',
    headers: {
      // Insert your AIML API Key instead of :
      'Authorization': 'Bearer ',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'topaz-labs/sharpen-gen',
      image_url: 'https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/blurred-landscape.png',
      mode: 'Super Focus V2',
      strength: 0.6,
      output_format: 'jpeg',
    }),
  });
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "data": [ { "url": "https://cdn.aimlapi.com/komodo/output/6435616/5cff080e-5d24-4fc3-85f5-0e57621ead7d.jpeg?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Checksum-Mode=ENABLED&X-Amz-Credential=ccc352dcd71a436e5fd697125a1be9f8%2F20251027%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20251027T202819Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&x-id=GetObject&X-Amz-Signature=d6d1d9c641c33bde33b14090d579d490d30f75e82283764705acd28b18765a70" } ], "meta": { "usage": { "tokens_used": 210000 } } } ``` {% endcode %}
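The `url` field in the response points to the generated file. Here is a minimal sketch (assuming the response structure shown above and the parsed `data` object from the Python example) of downloading the sharpened image; the local file name `sharpened.jpeg` is arbitrary:

```python
import requests

# Sketch only: `data` is the parsed JSON response from the example above,
# and it is assumed to contain at least one item in its "data" array.
image_url = data["data"][0]["url"]

image_response = requests.get(image_url, stream=True)
image_response.raise_for_status()

with open("sharpened.jpeg", "wb") as file:
    for chunk in image_response.iter_content(chunk_size=8192):
        file.write(chunk)
```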
**Blurred Image → Deblurred Image** (`"mode": "Super Focus V2"`, `"strength": 0.6`)
--- # Source: https://docs.aimlapi.com/api-references/image-models/topaz-labs/sharpen.md # Sharpen {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `topaz-labs/sharpen` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview The model produces sharper visuals, eliminating blur and improving clarity across the subject or the entire frame. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["topaz-labs/sharpen"]},"mode":{"type":"string","enum":["Standard","Strong","Lens Blur","Lens Blur V2","Motion Blur","Natural","Refocus"]},"image_url":{"type":"string","format":"uri","description":"The URL of the reference image."},"output_format":{"type":"string","enum":["jpeg","jpg","png","tiff","tif"],"default":"jpeg","description":"The format of the generated image."},"subject_detection":{"type":"string","enum":["All","Foreground","Background"],"default":"All","description":"Specifies which subjects to detect and process. Options: 'All' (detect all subjects), 'Foreground' (detect only foreground subjects), 'Background' (detect background subjects)."},"face_enhancement":{"type":"boolean","default":true,"description":"Whether to enhance faces in the image. When true, the model applies face-specific improvements."},"face_enhancement_creativity":{"type":"number","minimum":0,"maximum":1,"default":0,"description":"Level of creativity for face enhancement (0-1). Higher values allow more creative, less conservative changes."},"face_enhancement_strength":{"type":"number","minimum":0,"maximum":1,"default":0.8,"description":"How sharp enhanced faces are relative to background (0-1). Lower values blend changes subtly; higher values make faces more pronounced."},"strength":{"type":"number","minimum":0.01,"maximum":1,"description":"Defines the overall intensity of the sharpening effect. Increases details. Too much sharpening can create an unrealistic result."},"minor_denoise":{"type":"number","minimum":0.01,"maximum":1,"description":"Removes noisy pixels to increase clarity. 
Can slightly increase image sharpness."}},"required":["model","mode","image_url"],"title":"topaz-labs/sharpen"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's sharpen a relatively strongly blurred image using the `Strong` mode while adjusting the `strength` parameter. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "topaz-labs/sharpen", "image_url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/blurred-landscape.png", "mode": "Strong", "strength": 0.9, "minor_denoise": 0.9, "output_format": "png", } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'topaz-labs/sharpen', image_url: 'https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/blurred-landscape.png', mode: 'Strong', strength: 0.9, minor_denoise: 0.9, output_format: 'png', }), }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "data": [ { "url": "https://cdn.aimlapi.com/komodo/output/6435616/ddb723c4-ed16-42f4-8818-9ca4de176ea7.png?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Checksum-Mode=ENABLED&X-Amz-Credential=ccc352dcd71a436e5fd697125a1be9f8%2F20251027%2Fauto%2Fs3%2Faws4_request&X-Amz-Date=20251027T162246Z&X-Amz-Expires=604800&X-Amz-SignedHeaders=host&x-id=GetObject&X-Amz-Signature=4f4c449772b258bcf53e7257444698e2e486832e77ab5835728afc4aabfa0f8c" } ], "meta": { "usage": { "tokens_used": 210000 } } } ``` {% endcode %}
**Blurred Image → Deblurred Image** (`"mode": "Strong"`, `"strength": 0.9`)
For clarity, we’ve created a split image showing the results of different parameter settings.
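If you want to produce such a comparison yourself, here is a minimal sketch that requests the same image at several `strength` settings. It reuses the endpoint and parameters from the example above; the specific list of strength values is an arbitrary illustration:

```python
import requests

API_KEY = ""  # Insert your AIML API Key
IMAGE_URL = "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/blurred-landscape.png"

# Arbitrary example values -- adjust to taste.
for strength in (0.3, 0.6, 0.9):
    response = requests.post(
        "https://api.aimlapi.com/v1/images/generations",
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        json={
            "model": "topaz-labs/sharpen",
            "image_url": IMAGE_URL,
            "mode": "Strong",
            "strength": strength,
            "minor_denoise": 0.9,
            "output_format": "png",
        },
    )
    response.raise_for_status()
    url = response.json()["data"][0]["url"]
    print(f"strength={strength}: {url}")
```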
--- # Source: https://docs.aimlapi.com/integrations/sillytavern.md # SillyTavern ## About [SillyTavern](https://github.com/SillyTavern/SillyTavern) is a locally installed user interface that allows you to interact with text generation LLMs, image generation engines, and TTS voice models. Integration with the AI/ML API currently applies only to LLMs. *** ## Installing SillyTavern (Fresh Setup) {% hint style="info" %} Adapted from the official SillyTavern [README / Installation](https://github.com/SillyTavern/SillyTavern?tab=readme-ov-file#-installation) {% endhint %} ### Windows (Recommended: SillyTavern Launcher) {% hint style="warning" %} **Warning:** * Do **not** install into any Windows‑controlled folder (Program Files, System32, etc.). * Do **not** run `Start.bat` with administrator permissions. * Windows 7 is **not** supported (requires Node.js 18.16+). {% endhint %} 1. Make sure **Node.js** (latest LTS) and **Git** are installed. 2. Open **Run** (`Win + R`) and execute: ```bash cmd /c winget install -e --id Git.Git ``` 3. In **File Explorer**, navigate to a non‑system folder (e.g. `C:\SillyTavern`), click the address bar, type `cmd` and press Enter. 4. Clone the release branch and launch SillyTavern: {% code overflow="wrap" %} ```bash git clone https://github.com/SillyTavern/SillyTavern -b release cd SillyTavern start Start.bat ``` {% endcode %} 5. After the installer finishes, a browser window will open with the SillyTavern interface. *** ### Linux / macOS 1. Install **Git** and **Node.js** (via your distro’s package manager or Homebrew). 2. In a terminal, run: ```bash # Clone the release branch git clone https://github.com/SillyTavern/SillyTavern -b release cd SillyTavern ``` 3. Make the startup script executable and run it: ```bash chmod +x start.sh ./start.sh ``` 4. Open your browser to the URL shown in the console (default: `http://localhost:8000`). {% hint style="success" %} For Docker, Termux, GitHub Desktop, and other installation methods, see the full [Installation section](https://github.com/SillyTavern/SillyTavern?tab=readme-ov-file#-installation) in the upstream README. {% endhint %} *** ## Connecting AI/ML API in SillyTavern ### Step 1. Launch SillyTavern → Set Persona * On first launch you'll see "Welcome to SillyTavern" * Enter `AI/ML API` as the **Persona Name** for example * Click **Save** > This step is required to unlock the chat UI.
### Step 2. Go to Connection Settings

* Open the ⚙ **Settings** tab → **Connection Profile** (second tab)
* Configure:
  * `API`: Chat Completion
  * `Chat Completion Source`: AI/ML API
### Step 3. Enter API Key

1. Copy your API key from [https://aimlapi.com/app/keys](https://aimlapi.com/app/keys?utm_source=sillytavern\&utm_medium=github\&utm_campaign=integration)
2. Paste it into the **AI/ML API Key** field.
3. Click the 🔑 icon to save — it should show a timestamp.
### Step 4. Choose a model Click the dropdown next to **AI/ML Model** and pick any model such as: * `gpt-4o-mini-2024-07-18` * `claude-3-5-sonnet` * `gemini-1.5-flash`
### Step 5. Test Connection

Click **Connect**, then **Test Message**.

* You should see `API connection successful`.
* 🟢 Status: `Valid`.
### 💬 Step 6. Send a Message Use the input box below to send a test message:
If everything is set up correctly, you’ll see the assistant reply like this:
*** ### 🎉 Step 7. Done – You’re All Set! You’re now connected to AI/ML API and can start chatting with any of 200+ models. {% hint style="success" %} Tip: Try Claude 3.5, GPT-4o, Gemini 1.5 or explore more in [Model Playground](https://aimlapi.com/app?utm_source=sillytavern\&utm_medium=github\&utm_campaign=integration) {% endhint %} *** ## ✅ Config checklist | Field | Value | | ------- | --------------------------- | | API | Chat Completion | | Source | AI/ML API | | API Key | `********` (saved) | | Model | `gpt-4o-mini-2024-07-18` | | Status | ✅ API connection successful | *** ## 🔗 Internal Links * [AI/ML API Model Catalog](https://aimlapi.com/models?utm_source=sillytavern\&utm_medium=github\&utm_campaign=integration) * [Your API Keys Page](https://aimlapi.com/app/keys?utm_source=sillytavern\&utm_medium=github\&utm_campaign=integration) * [Community & Feedback](https://aimlapi.com/community?utm_source=sillytavern\&utm_medium=github\&utm_campaign=integration) --- # Source: https://docs.aimlapi.com/api-references/speech-models/speech-to-text/assembly-ai/slam-1.md # slam-1 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `aai/slam-1` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} A new Speech-to-Text model offering exceptional accuracy by leveraging its deep understanding of context and semantics (English only). {% hint style="success" %} This model use per-second billing. The cost of audio transcription is based on the number of seconds in the input audio file, not the processing time. {% endhint %} ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema #### Creating and sending a speech-to-text conversion task to the server ## POST /v1/stt/create > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.SpeechToTextCreateResponseDTO":{"type":"object","properties":{"generation_id":{"type":"string","format":"uuid"}},"required":["generation_id"]}}},"paths":{"/v1/stt/create":{"post":{"operationId":"VoiceModelsController_createSpeechToText_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["aai/slam-1"]},"audio":{"type":"object","properties":{"buffer":{"nullable":true},"mimetype":{"type":"string"},"size":{"type":"integer"},"originalname":{"type":"string"},"encoding":{"type":"string"},"fieldname":{"type":"string"}},"required":["mimetype","originalname","encoding","fieldname"],"description":"The audio file to transcribe."},"audio_start_from":{"type":"integer","description":"The point in time, in milliseconds, in the file at which the transcription was started."},"audio_end_at":{"type":"integer","description":"The point in time, in milliseconds, in the file at which the transcription was terminated."},"language_code":{"type":"string","description":"The language of your audio file. Possible values are found in Supported Languages. 
The default value is 'en_us'."},"language_confidence_threshold":{"type":"number","minimum":0,"maximum":1,"description":"The confidence threshold for the automatically detected language. An error will be returned if the language confidence is below this threshold. Defaults to 0."},"language_detection":{"type":"boolean","description":"Enable Automatic language detection, either true or false. Available for universal model only."},"punctuate":{"type":"boolean","nullable":true,"default":null,"description":"Adds punctuation and capitalization to the transcript"},"format_text":{"type":"boolean","default":true,"description":"Enable Text Formatting, can be true or false."},"disfluencies":{"type":"boolean","default":false,"description":"Transcribe Filler Words, like \"umm\", in your media file; can be true or false."},"multichannel":{"type":"boolean","default":false,"description":"Enable Multichannel transcription, can be true or false."},"speaker_labels":{"type":"boolean","nullable":true,"default":null,"description":"Enable Speaker diarization, can be true or false."},"speakers_expected":{"type":"integer","nullable":true,"default":null,"description":"Tell the speaker label model how many speakers it should attempt to identify. See Speaker diarization for more details."},"content_safety":{"type":"boolean","default":false,"description":"Enable Content Moderation, can be true or false."},"iab_categories":{"type":"boolean","default":false,"description":"Enable Topic Detection, can be true or false."},"custom_spelling":{"type":"array","items":{"type":"object","properties":{"from":{"type":"string"},"to":{"type":"string"}},"required":["from","to"]},"description":"Customize how words are spelled and formatted using to and from values."},"auto_highlights":{"type":"boolean","default":false,"description":"Enable Key Phrases, either true or false."},"word_boost":{"type":"array","items":{"type":"string"},"description":"The list of custom vocabulary to boost transcription probability for."},"boost_param":{"type":"string","enum":["low","default","high"],"description":"How much to boost specified words. Allowed values: low, default, high."},"filter_profanity":{"type":"boolean","default":false,"description":"Filter profanity from the transcribed text, can be true or false."},"redact_pii":{"type":"boolean","default":false,"description":"Redact PII from the transcribed text using the Redact PII model, can be true or false."},"redact_pii_audio":{"type":"boolean","default":false,"description":"Generate a copy of the original media file with spoken PII \"beeped\" out, can be true or false. See PII redaction for more details."},"redact_pii_audio_quality":{"type":"string","enum":["mp3","wav"],"description":"Controls the filetype of the audio created by redact_pii_audio. Currently supports mp3 (default) and wav. 
See PII redaction for more details."},"redact_pii_policies":{"type":"array","items":{"type":"string","enum":["account_number","banking_information","blood_type","credit_card_cvv","credit_card_expiration","credit_card_number","date","date_interval","date_of_birth","drivers_license","drug","duration","email_address","event","filename","gender_sexuality","healthcare_number","injury","ip_address","language","location","marital_status","medical_condition","medical_process","money_amount","nationality","number_sequence","occupation","organization","passport_number","password","person_age","person_name","phone_number","physical_attribute","political_affiliation","religion","statistics","time","url","us_social_security_number","username","vehicle_id","zodiac_sign"]},"description":"The list of PII Redaction policies to enable. See PII redaction for more details."},"redact_pii_sub":{"type":"string","enum":["entity_name","hash"],"description":"The replacement logic for detected PII, can be `entity_type` or `hash`. See PII redaction for more details."},"sentiment_analysis":{"type":"boolean","default":false,"description":"Enable Sentiment Analysis, can be true or false."},"entity_detection":{"type":"boolean","default":false,"description":"Enable Entity Detection, can be true or false."},"summarization":{"type":"boolean","default":false,"description":"Enable Summarization, can be true or false."},"summary_model":{"type":"string","enum":["informative","conversational","catchy"],"description":"The model to summarize the transcript. Allowed values: informative, conversational, catchy."},"summary_type":{"type":"string","enum":["bullets","bullets_verbose","gist","headline","paragraph"],"description":"The type of summary. Allowed values: bullets, bullets_verbose, gist, headline, paragraph."},"auto_chapters":{"type":"boolean","default":false,"description":"Enable Auto Chapters, either true or false."},"speech_threshold":{"type":"number","minimum":0,"maximum":1,"description":"Reject audio files that contain less than this fraction of speech. 
Valid values are in the range [0, 1] inclusive."}},"required":["model","audio"]}}}},"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.SpeechToTextCreateResponseDTO"}}}}},"tags":["Voice Models"]}}}} ``` #### Requesting the result of the task from the server using the generation\_id ## GET /v1/stt/{generation\_id} > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.SpeechToTextGetResponseDTO":{"type":"object","properties":{"generation_id":{"type":"string"},"status":{"type":"string","enum":["queued","completed","error","generating"]},"result":{"anyOf":[{"type":"object","properties":{"metadata":{"type":"object","properties":{"transaction_key":{"type":"string","description":"A unique transaction key; currently always “deprecated”."},"request_id":{"type":"string","description":"A UUID identifying this specific transcription request."},"sha256":{"type":"string","description":"The SHA-256 hash of the submitted audio file (for pre-recorded requests)."},"created":{"type":"string","format":"date-time","description":"ISO-8601 timestamp."},"duration":{"type":"number","description":"Length of the audio in seconds."},"channels":{"type":"number","description":"The top-level results object containing per-channel transcription alternatives."},"models":{"type":"array","items":{"type":"string"},"description":"List of model UUIDs used for this transcription"},"model_info":{"type":"object","additionalProperties":{"type":"object","properties":{"name":{"type":"string","description":"The human-readable name of the model — identifies which model was used."},"version":{"type":"string","description":"The specific version of the model."},"arch":{"type":"string","description":"The architecture of the model — describes the model family / generation."}},"required":["name","version","arch"]},"description":"Mapping from each model UUID (in 'models') to detailed info: its name, version, and architecture."}},"required":["transaction_key","request_id","sha256","created","duration","channels","models","model_info"],"description":"Metadata about the transcription response, including timing, models, and IDs."},"results":{"type":"object","nullable":true,"properties":{"channels":{"type":"object","properties":{"alternatives":{"type":"array","items":{"type":"object","properties":{"transcript":{"type":"string","description":"The full transcript text for this alternative."},"confidence":{"type":"number","description":"Overall confidence score (0-1) that assigns to this transcript alternative."},"words":{"type":"array","items":{"type":"object","properties":{"word":{"type":"string","description":"The raw recognized word, without punctuation or capitalization."},"start":{"type":"number","description":"Start timestamp of the word (in seconds, from beginning of audio)."},"end":{"type":"number","description":"End timestamp of the word (in seconds)."},"confidence":{"type":"number","description":"Confidence score (0-1) for this individual word."},"punctuated_word":{"type":"string","description":"The same word but with punctuation/capitalization applied (if smart_format is enabled)."}},"required":["word","start","end","confidence","punctuated_word"]},"description":"List of word-level timing, confidence, and punctuation 
details."},"paragraphs":{"type":"array","items":{"type":"object","properties":{"transcript":{"type":"string","description":"The transcript split into paragraphs (with line breaks), when paragraphing is enabled."},"paragraphs":{"type":"object","properties":{"sentences":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"Text of a single sentence in the paragraph."},"start":{"type":"number","description":"Start time of the sentence (in seconds)."},"end":{"type":"number","description":"End time of the sentence (in seconds)."}},"required":["text","start","end"]},"description":"List of sentences in this paragraph, with start/end times."},"num_words":{"type":"number","description":"Number of words in this paragraph."},"start":{"type":"number","description":"Start time of the paragraph (in seconds)."},"end":{"type":"number","description":"End time of the paragraph (in seconds)."}},"required":["sentences","num_words","start","end"],"description":"Structure describing each paragraph: its timespan, word count, and sentence breakdown."}},"required":["transcript","paragraphs"]},"description":"An array of paragraph objects, present when the paragraphs feature is enabled."}},"required":["transcript","confidence","words","paragraphs"]},"description":"List of possible transcription hypotheses (“alternatives”) for each channel."}},"required":["alternatives"],"description":"The top-level results object containing per-channel transcription alternatives."}},"required":["channels"]}},"required":["metadata"]},{"type":"object","properties":{"id":{"type":"string","format":"uuid"},"language_model":{"type":"string"},"acoustic_model":{"type":"string"},"language_code":{"type":"string"},"status":{"type":"string","enum":["queued","processing","completed","error"]},"language_detection":{"type":"boolean"},"language_confidence_threshold":{"type":"number"},"language_confidence":{"type":"number"},"speech_model":{"type":"string","enum":["best","slam-1","universal"]},"text":{"type":"string"},"words":{"type":"array","items":{"type":"object","properties":{"confidence":{"type":"number"},"end":{"type":"number"},"speaker":{"type":"string"},"start":{"type":"number"},"text":{"type":"string"}},"required":["confidence","end","start","text"]}},"utterances":{"type":"array","items":{"type":"object","properties":{"confidence":{"type":"number"},"end":{"type":"number"},"speaker":{"type":"string"},"start":{"type":"number"},"text":{"type":"string"},"words":{"type":"array","items":{"type":"object","properties":{"confidence":{"type":"number"},"end":{"type":"number"},"speaker":{"type":"string"},"start":{"type":"number"},"text":{"type":"string"}},"required":["confidence","end","start","text"]}}},"required":["confidence","end","speaker","start","text","words"]}},"confidence":{"type":"number"},"audio_duration":{"type":"number"},"punctuate":{"type":"boolean"},"format_text":{"type":"boolean"},"disfluencies":{"type":"boolean"},"multichannel":{"type":"boolean"},"webhook_url":{"type":"string"},"webhook_status_code":{"type":"number"},"webhook_auth_header_name":{"type":"string"},"speed_boost":{"type":"boolean"},"auto_highlights_result":{"type":"object","properties":{"status":{"type":"string"},"results":{"type":"array","items":{"type":"object","properties":{"count":{"type":"number"},"rank":{"type":"number"},"text":{"type":"string"},"timestamps":{"type":"array","items":{"type":"object","properties":{"start":{"type":"number"},"end":{"type":"number"}},"required":["start","end"]}}},"required":["count","rank","text","tim
estamps"]}}},"required":["status","results"]},"auto_highlights":{"type":"boolean"},"audio_start_from":{"type":"number"},"audio_end_at":{"type":"number"},"word_boost":{"type":"array","items":{"type":"string"}},"boost_param":{"type":"string"},"filter_profanity":{"type":"boolean"},"redact_pii":{"type":"boolean"},"redact_pii_audio":{"type":"boolean"},"redact_pii_audio_quality":{"type":"string","enum":["mp3","wav"]},"redact_pii_policies":{"type":"array","items":{"type":"string"}},"redact_pii_sub":{"type":"string","enum":["entity_name","hash"]},"speaker_labels":{"type":"boolean"},"speakers_expected":{"type":"number"},"content_safety":{"type":"boolean"},"iab_categories":{"type":"boolean"},"content_safety_labels":{"type":"object","properties":{"status":{"type":"string"},"results":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string"},"labels":{"type":"array","items":{"type":"object","properties":{"label":{"type":"string"},"confidence":{"type":"number"},"severity":{"type":"number"}},"required":["label","confidence","severity"]}},"sentences_idx_start":{"type":"number"},"sentences_idx_end":{"type":"number"},"timestamp":{"type":"object","properties":{"start":{"type":"number"},"end":{"type":"number"}},"required":["start","end"]}},"required":["text","labels","sentences_idx_start","sentences_idx_end","timestamp"]}},"summary":{"type":"object","additionalProperties":{"type":"number"}}},"required":["status","results","summary"]},"iab_categories_result":{"type":"object","properties":{"status":{"type":"string"},"results":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string"},"labels":{"type":"array","items":{"type":"object","properties":{"relevance":{"type":"number"},"label":{"type":"string"}},"required":["relevance","label"]}},"timestamp":{"type":"object","properties":{"start":{"type":"number"},"end":{"type":"number"}},"required":["start","end"]}},"required":["text","labels","timestamp"]}},"summary":{"type":"object","additionalProperties":{"type":"number"}}},"required":["status","results","summary"]},"custom_spelling":{"type":"array","items":{"type":"object","properties":{"from":{"type":"string"},"to":{"type":"string"}},"required":["from","to"]}},"chapters":{"type":"array","items":{"type":"object","properties":{"summary":{"type":"string"},"headline":{"type":"string"},"gist":{"type":"string"},"start":{"type":"number"},"end":{"type":"number"}},"required":["summary","headline","gist","start","end"]}},"summarization":{"type":"boolean"},"summary_type":{"type":"string"},"summary_model":{"type":"string"},"summary":{"type":"string"},"auto_chapters":{"type":"boolean"},"sentiment_analysis":{"type":"boolean"},"sentiment_analysis_results":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string"},"start":{"type":"number"},"end":{"type":"number"},"sentiment":{"type":"string","enum":["POSITIVE","NEUTRAL","NEGATIVE"]},"confidence":{"type":"number"},"speaker":{"type":"string"}},"required":["text","start","end","sentiment","confidence"]}},"entity_detection":{"type":"boolean"},"entities":{"type":"array","items":{"type":"object","properties":{"entity_type":{"type":"string"},"text":{"type":"string"},"start":{"type":"number"},"end":{"type":"number"}},"required":["entity_type","text","start","end"]}},"speech_threshold":{"type":"number"},"throttled":{"type":"boolean"},"error":{"type":"string"}},"required":["id","status"],"additionalProperties":false},{"type":"object","properties":{"text":{"type":"string"},"usage":{"type":"object","properties":{"type":{"type":"st
ring","enum":["tokens"]},"input_tokens":{"type":"number"},"input_token_details":{"type":"object","properties":{"text_tokens":{"type":"number"},"audio_tokens":{"type":"number"}},"required":["text_tokens","audio_tokens"]},"output_tokens":{"type":"number"},"total_tokens":{"type":"number"}},"required":["input_tokens","output_tokens","total_tokens"]}},"required":["text"],"additionalProperties":false},{"nullable":true}]},"error":{"nullable":true}},"required":["generation_id","status"]}}},"paths":{"/v1/stt/{generation_id}":{"get":{"operationId":"VoiceModelsController_getSTT_v1","parameters":[{"name":"generation_id","required":true,"in":"path","schema":{"type":"string"}}],"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.SpeechToTextGetResponseDTO"}}}}},"tags":["Voice Models"]}}}} ``` ## Quick Example: Processing a Speech Audio File via URL Let's transcribe the following audio fragment: {% embed url="" %} {% code overflow="wrap" %} ```python import time import requests import json # for getting a structured output with indentation base_url = "https://api.aimlapi.com/v1" # Insert your AIML API Key instead of : api_key = "" # Creating and sending a speech-to-text conversion task to the server def create_stt(): url = f"{base_url}/stt/create" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "aai/slam-1", "url": "https://audio-samples.github.io/samples/mp3/blizzard_primed/sample-0.mp3" } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_stt(gen_id): url = f"{base_url}/stt/{gen_id}" headers = { "Authorization": f"Bearer {api_key}", } response = requests.get(url, headers=headers) return response.json() # First, start the generation, then repeatedly request the result from the server every 10 seconds. def main(): stt_response = create_stt() gen_id = stt_response.get("generation_id") if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_stt(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status == "waiting" or status == "active": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data["result"]["text"]) # Uncomment the line below to print the entire "result" object with all service data # print("Processing complete:\n", json.dumps(response_data["result"], indent=2, ensure_ascii=False)) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %}
Response {% code overflow="wrap" %} ```json5 {'generation_id': '227b2ca6-72a6-4e82-906d-957ba03e470f'} Still waiting... Checking again in 10 seconds. Processing complete:\n { "id": "51d9be59-2180-407f-93e1-ea3c3dec7fcd", "language_model": "assemblyai_default", "acoustic_model": "assemblyai_default", "language_code": "en_us", "status": "completed", "audio_url": "https://audio-samples.github.io/samples/mp3/blizzard_primed/sample-0.mp3", "text": "He doesn't belong to you, and I don't see how you have anything to do with what is be his power, if he possess only that from this stage to you.", "words": [ { "text": "He", "start": 320, "end": 400, "confidence": 0.8894227, "speaker": null }, { "text": "doesn't", "start": 480, "end": 960, "confidence": 0.85873646, "speaker": null }, { "text": "belong", "start": 960, "end": 1360, "confidence": 0.98418343, "speaker": null }, { "text": "to", "start": 1440, "end": 1520, "confidence": 0.9947456, "speaker": null }, { "text": "you,", "start": 1600, "end": 1680, "confidence": 0.542386, "speaker": null }, { "text": "and", "start": 1920, "end": 2000, "confidence": 0.99181706, "speaker": null }, { "text": "I", "start": 2160, "end": 2240, "confidence": 0.9949956, "speaker": null }, { "text": "don't", "start": 2240, "end": 2560, "confidence": 0.9778317, "speaker": null }, { "text": "see", "start": 2560, "end": 2640, "confidence": 0.9933328, "speaker": null }, { "text": "how", "start": 2800, "end": 2880, "confidence": 0.9756232, "speaker": null }, { "text": "you", "start": 3120, "end": 3200, "confidence": 0.9898425, "speaker": null }, { "text": "have", "start": 3360, "end": 3440, "confidence": 0.9754379, "speaker": null }, { "text": "anything", "start": 3600, "end": 3680, "confidence": 0.9352868, "speaker": null }, { "text": "to", "start": 4080, "end": 4160, "confidence": 0.99539536, "speaker": null }, { "text": "do", "start": 4160, "end": 4320, "confidence": 0.994307, "speaker": null }, { "text": "with", "start": 4400, "end": 4480, "confidence": 0.9825462, "speaker": null }, { "text": "what", "start": 4560, "end": 4640, "confidence": 0.9361658, "speaker": null }, { "text": "is", "start": 4800, "end": 4880, "confidence": 0.9499776, "speaker": null }, { "text": "be", "start": 4960, "end": 5040, "confidence": 0.74536353, "speaker": null }, { "text": "his", "start": 5120, "end": 5280, "confidence": 0.98388886, "speaker": null }, { "text": "power,", "start": 5360, "end": 5440, "confidence": 0.15106322, "speaker": null }, { "text": "if", "start": 5600, "end": 5680, "confidence": 0.22255379, "speaker": null }, { "text": "he", "start": 5920, "end": 6000, "confidence": 0.3464594, "speaker": null }, { "text": "possess", "start": 6080, "end": 6640, "confidence": 0.094453804, "speaker": null }, { "text": "only", "start": 6640, "end": 6720, "confidence": 0.83083403, "speaker": null }, { "text": "that", "start": 6880, "end": 6960, "confidence": 0.9876517, "speaker": null }, { "text": "from", "start": 7120, "end": 7200, "confidence": 0.9683188, "speaker": null }, { "text": "this", "start": 7200, "end": 7280, "confidence": 0.9067986, "speaker": null }, { "text": "stage", "start": 7440, "end": 7680, "confidence": 0.9634684, "speaker": null }, { "text": "to", "start": 7920, "end": 8000, "confidence": 0.9013573, "speaker": null }, { "text": "you.", "start": 8080, "end": 8160, "confidence": 0.7715247, "speaker": null } ], "utterances": null, "confidence": 0.83341193, "audio_duration": 11, "punctuate": true, "format_text": true, "dual_channel": null, "webhook_url": null, 
"webhook_status_code": null, "webhook_auth": false, "webhook_auth_header_name": null, "speed_boost": false, "auto_highlights_result": null, "auto_highlights": false, "audio_start_from": null, "audio_end_at": null, "word_boost": [], "boost_param": null, "prompt": null, "keyterms_prompt": [], "filter_profanity": false, "redact_pii": false, "redact_pii_audio": false, "redact_pii_audio_quality": null, "redact_pii_audio_options": null, "redact_pii_policies": null, "redact_pii_sub": null, "speaker_labels": false, "speaker_options": null, "content_safety": false, "iab_categories": false, "content_safety_labels": { "status": "unavailable", "results": [], "summary": {} }, "iab_categories_result": { "status": "unavailable", "results": [], "summary": {} }, "language_detection": false, "language_detection_options": null, "language_confidence_threshold": null, "language_confidence": null, "custom_spelling": null, "throttled": false, "auto_chapters": false, "summarization": false, "summary_type": null, "summary_model": null, "custom_topics": false, "topics": [], "speech_threshold": null, "speech_model": "slam-1", "chapters": null, "disfluencies": false, "entity_detection": false, "sentiment_analysis": false, "sentiment_analysis_results": null, "entities": null, "speakers_expected": null, "summary": null, "custom_topics_results": null, "is_deleted": null, "multichannel": null, "project_id": 675898, "token_id": 1245789 } ``` {% endcode %}
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/perplexity/sonar-pro.md # sonar-pro {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `perplexity/sonar-pro` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview Pro version is built for real-time, web-connected research and complex queries. Handles multi-step, deeper reasoning tasks. Retrieves and synthesizes multiple web searches, yielding more detailed answers. Delivers 2× more citations than standard [Sonar](https://docs.aimlapi.com/api-references/text-models-llm/perplexity/sonar) for enhanced traceability. ## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
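The steps above refer to the full code example further down the page. As a quick illustration only, here is a minimal sketch of such a request in Python, assuming the `/v1/chat/completions` endpoint and the `perplexity/sonar-pro` model ID documented in the schema below; the question text is just a placeholder:

```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of :
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "perplexity/sonar-pro",
        "messages": [
            # Put your own question or request into the content field.
            {"role": "user", "content": "What are the latest findings on coral reef recovery?"},
        ],
    },
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```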
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["perplexity/sonar-pro"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. 
This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"web_search_options":{"type":"object","properties":{"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"approximate":{"type":"object","properties":{"city":{"type":"string","description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","pattern":"^[A-Z]{2}$","description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"description":"Approximate location parameters for the search."},"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."}},"required":["approximate","type"],"description":"Approximate location parameters for the search."}},"description":"This tool searches the web for relevant results to use in a response."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"search_mode":{"type":"string","enum":["academic","web"],"default":"academic","description":"Controls the search mode used for the request. When set to 'academic', results will prioritize scholarly sources like peer-reviewed papers and academic journals."},"search_domain_filter":{"type":"array","items":{"type":"string"},"description":"A list of domains to limit search results to. Currently limited to 10 domains for Allowlisting and Denylisting. 
For Denylisting, add a - at the beginning of the domain string."},"return_images":{"type":"boolean","default":false,"description":"Determines whether search results should include images."},"return_related_questions":{"type":"boolean","default":false,"description":"Determines whether related questions should be returned."},"search_recency_filter":{"type":"string","enum":["day","week","month","year"],"description":"Filters search results based on time (e.g., 'week', 'day')."},"search_after_date_filter":{"type":"string","pattern":"^(0?[1-9]|1[0-2])\\/(0?[1-9]|[12]\\d|3[01])\\/\\d{4}$","description":"Filters search results to only include content published after this date. Format should be %m/%d/%Y (e.g. 3/1/2025)"},"search_before_date_filter":{"type":"string","pattern":"^(0?[1-9]|1[0-2])\\/(0?[1-9]|[12]\\d|3[01])\\/\\d{4}$","description":"Filters search results to only include content published before this date. Format should be %m/%d/%Y (e.g. 3/1/2025)"},"last_updated_after_filter":{"type":"string","pattern":"^(0?[1-9]|1[0-2])\\/(0?[1-9]|[12]\\d|3[01])\\/\\d{4}$","description":"Filters search results to only include content last updated after this date. Format should be %m/%d/%Y (e.g. 3/1/2025)"},"last_updated_before_filter":{"type":"string","pattern":"^(0?[1-9]|1[0-2])\\/(0?[1-9]|[12]\\d|3[01])\\/\\d{4}$","description":"Filters search results to only include content last updated before this date. Format should be %m/%d/%Y (e.g. 3/1/2025)"}},"required":["model","messages"],"title":"perplexity/sonar-pro"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"perplexity/sonar-pro", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'perplexity/sonar-pro', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "14884548-2103-493c-a69d-7585f36f1c80", "object": "chat.completion", "choices": [ { "index": 0, "finish_reason": "stop", "message": { "role": "assistant", "content": "**Hello** is primarily an English salutation or greeting, first recorded in written form in 1826[1]. It is commonly used to initiate conversation or acknowledge someone's presence.\n\nThe term has notable cultural associations:\n- Students often use \"Hello, World!\" as the first output when learning programming languages—a tradition established by its inclusion in influential programming textbooks[1].\n- \"Hello\" is also the title of notable songs, including Adele’s chart-topping 2015 single and Lionel Richie’s 1984 hit[2][3][4].\n\nAlternative cultural greetings include \"Aloha,\" \"Ciao,\" and \"Namaste,\" among others[1]." }, "delta": { "role": "assistant", "content": "" } } ], "created": 1753467346, "model": "sonar-pro", "usage": { "prompt_tokens": 12606, "completion_tokens": 4221, "total_tokens": 16827, "search_context_size": "low" }, "citations": [ "https://en.wikipedia.org/wiki/Hello", "https://www.youtube.com/watch?v=YQHsXMglC9A", "https://en.wikipedia.org/wiki/Hello_(Adele_song)", "https://www.youtube.com/watch?v=mHONNcZbwDY", "https://www.hello-products.com" ], "search_results": [ { "title": "Hello - Wikipedia", "url": "https://en.wikipedia.org/wiki/Hello", "date": "2002-06-09", "last_updated": "2025-07-23" }, { "title": "Adele - Hello (Official Music Video) - YouTube", "url": "https://www.youtube.com/watch?v=YQHsXMglC9A", "date": "2015-10-22", "last_updated": "2025-07-07" }, { "title": "Hello (Adele song) - Wikipedia", "url": "https://en.wikipedia.org/wiki/Hello_(Adele_song)", "date": "2015-10-22", "last_updated": "2025-06-13" }, { "title": "Lionel Richie - Hello (Official Music Video) - YouTube", "url": "https://www.youtube.com/watch?v=mHONNcZbwDY", "date": "2020-11-20", "last_updated": "2025-07-07" }, { "title": "Hello Products", "url": "https://www.hello-products.com", "date": "2025-06-04", "last_updated": "2025-06-16" } ] } ``` {% endcode %}
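Beyond the basic call above, the schema for `perplexity/sonar-pro` exposes several optional web-search controls. The snippet below is an illustrative sketch rather than an official example: the parameter names come from the API schema on this page, while the prompt, the domain list, and the `<YOUR_AIMLAPI_KEY>` placeholder are arbitrary values chosen for demonstration.

{% code overflow="wrap" %}
```python
import requests
import json

# Illustrative sketch: optional web-search parameters documented in the schema above.
response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # <YOUR_AIMLAPI_KEY> is a placeholder for your actual AIML API key
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "perplexity/sonar-pro",
        "messages": [
            {"role": "user", "content": "Summarize this week's AI research news"}
        ],
        # Optional web-search controls (see the API schema above):
        "search_mode": "web",                                   # or "academic"
        "search_recency_filter": "week",                        # day / week / month / year
        "search_domain_filter": ["arxiv.org", "-reddit.com"],   # a leading '-' denylists a domain
        "return_related_questions": True,
        "web_search_options": {"search_context_size": "medium"},
    },
)

data = response.json()
print(json.dumps(data, indent=2, ensure_ascii=False))
```
{% endcode %}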
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/perplexity/sonar.md # sonar {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `perplexity/sonar` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A model built on top of Llama 3.3 70B and optimized for Perplexity search. Fast, cost-effective, everyday search and Q\&A. Ideal for simple queries, topic summaries, and fact-checking. ## How to Make a Call
Step-by-Step Instructions

:digit\_one: **Setup You Can’t Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to.

:digit\_four: **(Optional)** **Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["perplexity/sonar"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. 
This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"temperature":{"type":"number","minimum":0,"maximum":2,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"logit_bias":{"type":"object","nullable":true,"additionalProperties":{"type":"number","minimum":-100,"maximum":100},"description":"Modify the likelihood of specified tokens appearing in the completion.\n \n Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token."},"frequency_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"web_search_options":{"type":"object","properties":{"search_context_size":{"type":"string","enum":["low","medium","high"],"description":"High level guidance for the amount of context window space to use for the search. One of low, medium, or high. medium is the default."},"user_location":{"type":"object","nullable":true,"properties":{"approximate":{"type":"object","properties":{"city":{"type":"string","description":"Free text input for the city of the user, e.g. San Francisco."},"country":{"type":"string","pattern":"^[A-Z]{2}$","description":"The two-letter ISO country code of the user, e.g. US."},"region":{"type":"string","description":"Free text input for the region of the user, e.g. California."},"timezone":{"type":"string","description":"The IANA timezone of the user, e.g. America/Los_Angeles."}},"description":"Approximate location parameters for the search."},"type":{"type":"string","enum":["approximate"],"description":"The type of location approximation. Always approximate."}},"required":["approximate","type"],"description":"Approximate location parameters for the search."}},"description":"This tool searches the web for relevant results to use in a response."},"top_k":{"type":"number","description":"Only sample from the top K options for each subsequent token. Used to remove \"long tail\" low probability responses. Recommended for advanced use cases only. You usually only need to use temperature."},"search_mode":{"type":"string","enum":["academic","web"],"default":"academic","description":"Controls the search mode used for the request. When set to 'academic', results will prioritize scholarly sources like peer-reviewed papers and academic journals."},"search_domain_filter":{"type":"array","items":{"type":"string"},"description":"A list of domains to limit search results to. Currently limited to 10 domains for Allowlisting and Denylisting. 
For Denylisting, add a - at the beginning of the domain string."},"return_images":{"type":"boolean","default":false,"description":"Determines whether search results should include images."},"return_related_questions":{"type":"boolean","default":false,"description":"Determines whether related questions should be returned."},"search_recency_filter":{"type":"string","enum":["day","week","month","year"],"description":"Filters search results based on time (e.g., 'week', 'day')."},"search_after_date_filter":{"type":"string","pattern":"^(0?[1-9]|1[0-2])\\/(0?[1-9]|[12]\\d|3[01])\\/\\d{4}$","description":"Filters search results to only include content published after this date. Format should be %m/%d/%Y (e.g. 3/1/2025)"},"search_before_date_filter":{"type":"string","pattern":"^(0?[1-9]|1[0-2])\\/(0?[1-9]|[12]\\d|3[01])\\/\\d{4}$","description":"Filters search results to only include content published before this date. Format should be %m/%d/%Y (e.g. 3/1/2025)"},"last_updated_after_filter":{"type":"string","pattern":"^(0?[1-9]|1[0-2])\\/(0?[1-9]|[12]\\d|3[01])\\/\\d{4}$","description":"Filters search results to only include content last updated after this date. Format should be %m/%d/%Y (e.g. 3/1/2025)"},"last_updated_before_filter":{"type":"string","pattern":"^(0?[1-9]|1[0-2])\\/(0?[1-9]|[12]\\d|3[01])\\/\\d{4}$","description":"Filters search results to only include content last updated before this date. Format should be %m/%d/%Y (e.g. 3/1/2025)"}},"required":["model","messages"],"title":"perplexity/sonar"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. 
Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. 
In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"perplexity/sonar", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'perplexity/sonar', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "id": "541db1f4-d5ef-4e65-9474-a10843fa92ab", "object": "chat.completion", "choices": [ { "index": 0, "finish_reason": "stop", "message": { "role": "assistant", "content": "Hello is a common English salutation or greeting, first recorded in writing in 1826 in the United States. It has since become widely used in spoken and written communication as a way to say \"hi\" or initiate conversation[1]. \n\nAdditionally, \"Hello\" is the title of well-known songs, such as Adele's 2015 hit and Lionel Richie's classic, both of which have contributed to the cultural popularity of the word[2][3]. \n\nIn other contexts, \"Hello\" is a brand name used by companies such as a vegan-friendly personal care products line and the meal kit service HelloFresh, showing its versatile use beyond just a greeting[4][5]." }, "delta": { "role": "assistant", "content": "" } } ], "created": 1753461943, "model": "sonar", "usage": { "prompt_tokens": 10502, "completion_tokens": 292, "total_tokens": 10794, "search_context_size": "low" }, "citations": [ "https://en.wikipedia.org/wiki/Hello", "https://en.wikipedia.org/wiki/Hello_(Adele_song)", "https://www.youtube.com/watch?v=mHONNcZbwDY", "https://www.hello-products.com", "https://www.hellofresh.com" ], "search_results": [ { "title": "Hello - Wikipedia", "url": "https://en.wikipedia.org/wiki/Hello", "date": "2002-06-09", "last_updated": "2025-07-23" }, { "title": "Hello (Adele song) - Wikipedia", "url": "https://en.wikipedia.org/wiki/Hello_(Adele_song)", "date": "2015-10-22", "last_updated": "2025-06-13" }, { "title": "Lionel Richie - Hello (Official Music Video) - YouTube", "url": "https://www.youtube.com/watch?v=mHONNcZbwDY", "date": "2020-11-20", "last_updated": "2025-07-07" }, { "title": "Hello Products", "url": "https://www.hello-products.com", "date": "2025-06-04", "last_updated": "2025-06-16" }, { "title": "HelloFresh® Meal Kits | Get 10 Free Meals + Free Breakfast For Life", "url": "https://www.hellofresh.com", "date": "2024-09-19", "last_updated": "2025-05-13" } ] } ``` {% endcode %}
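If you only need a few fields from this response, the sketch below shows one way to pull out the assistant message, the token usage, and the Perplexity-specific `citations` list. It assumes the response shape shown above; field availability can vary by model.

{% code overflow="wrap" %}
```python
import requests

API_KEY = ""  # insert your AIML API Key here

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    json={
        "model": "perplexity/sonar",
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
response.raise_for_status()
data = response.json()

# Assistant text from the first choice
print(data["choices"][0]["message"]["content"])

# Token usage, as shown in the example response above
usage = data.get("usage", {})
print("Total tokens:", usage.get("total_tokens"))

# Perplexity-specific list of source URLs, if present
for url in data.get("citations", []):
    print("Source:", url)
```
{% endcode %}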
--- # Source: https://docs.aimlapi.com/api-references/video-models/openai/sora-2-i2v.md # sora-2-i2v {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `openai/sora-2-i2v` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} Sora 2 is a new, powerful media generation model that generates videos with synced audio. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a reference image and a prompt.\ This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/sora-2-i2v"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"image_url":{"type":"string","format":"uri","description":"A URL or a Base64-encoded image file used as the initial frame for video generation.\nThe image dimensions must match the selected video resolution and aspect ratio.\nSupported configurations include:\n720p with aspect ratios:\n- 16:9 — 1280x720\n- 9:16 — 720x1280\n\n1080p with aspect ratios:\n- 16:9 — 1792x1024\n- 9:16 — 1024x1792"},"resolution":{"type":"string","enum":["720p"],"default":"720p","description":"The resolution of the output video, where the number refers to the short side in pixels."},"aspect_ratio":{"type":"string","enum":["16:9","9:16"],"default":"16:9","description":"The aspect ratio of the generated video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[4,8,12],"default":"4"}},"required":["model","prompt","image_url"],"title":"openai/sora-2-i2v"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AI/ML API key instead of : api_key = "" # Creating and sending a video generation task to the server def generate_video(): url = "https://api.aimlapi.com/v2/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "openai/sora-2-i2v", "prompt": "She turns around and smiles, then slowly walks out of the frame.", "image_url": "https://cdn.openai.com/API/docs/images/sora/woman_skyline_original_720p.jpeg", "resolution": "720p", "duration": 4 } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = "https://api.aimlapi.com/v2/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Generate video gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... 
Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ''; // Creating and sending a video generation task to the server async function generateVideo() { const url = 'https://api.aimlapi.com/v2/video/generations'; const data = { model: 'openai/sora-2-i2v', prompt: 'She turns around and smiles, then slowly walks out of the frame.', image_url: 'https://cdn.openai.com/API/docs/images/sora/woman_skyline_original_720p.jpeg', resolution: '720p', aspect_ratio: '16:9', duration: 4, }; try { const response = await fetch(url, { method: 'POST', headers: { Authorization: `Bearer ${apiKey}`, 'Content-Type': 'application/json', }, body: JSON.stringify(data), }); if (!response.ok) { const errorText = await response.text(); console.error(`Error: ${response.status} - ${errorText}`); return null; } const responseData = await response.json(); console.log(responseData); return responseData; } catch (error) { console.error('Request failed:', error); return null; } } // Requesting the result of the task from the server using the generation_id async function getVideo(genId) { const url = new URL('https://api.aimlapi.com/v2/video/generations'); url.searchParams.append('generation_id', genId); try { const response = await fetch(url, { method: 'GET', headers: { Authorization: `Bearer ${apiKey}`, 'Content-Type': 'application/json', }, }); return await response.json(); } catch (error) { console.error('Error fetching video:', error); return null; } } // Initiates video generation and checks the status every 10 seconds until completion or timeout async function main() { const genResponse = await generateVideo(); if (!genResponse) return; const genId = genResponse.id; console.log('Generation ID:', genId); if (genId) { const timeout = 600 * 1000; // 10 minutes const startTime = Date.now(); while (Date.now() - startTime < timeout) { const responseData = await getVideo(genId); if (!responseData) { console.error('Error: No response from API'); break; } const status = responseData.status; console.log('Status:', status); if (['waiting', 'active', 'queued', 'generating'].includes(status)) { console.log('Still waiting... Checking again in 10 seconds.'); await new Promise((resolve) => setTimeout(resolve, 10000)); } else { console.log('Processing complete:\n', responseData); return responseData; } } console.log('Timeout reached. Stopping.'); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Statuses

| Status | Description |
| --- | --- |
| `queued` | Job is waiting in queue |
| `generating` | Video is being generated |
| `completed` | Generation successful, video available |
| `error` | Generation failed, check `error` field |
Response {% code overflow="wrap" %} ```json5 Generation ID: video_68e572c03d08819188f79439891f6f280590a37b858cba30:openai/sora-2-i2v Status: generating Still waiting... Checking again in 10 seconds. Status: queued Still waiting... Checking again in 10 seconds. Status: queued Still waiting... Checking again in 10 seconds. ... Processing complete: { id: 'video_68e572c03d08819188f79439891f6f280590a37b858cba30:openai/sora-2-i2v', status: 'completed', video: { url: 'https://cdn.aimlapi.com/generations/hedgehog/1759867684272-d7a19473-8f19-421e-9bde-cf1d206c68e5.mp4' } } ``` {% endcode %}
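Once the task status is `completed`, the response includes `video.url`. The helper below is a minimal sketch (not part of the official examples) for saving that file locally with `requests`, reusing the `response_data` returned by `main()` in the Python example above.

{% code overflow="wrap" %}
```python
import requests

def download_video(response_data, file_name="sora_result.mp4"):
    """Save the generated video locally, given the completed result from main() above."""
    video = (response_data or {}).get("video") or {}
    url = video.get("url")
    if not url:
        print("No video URL in the response:", response_data)
        return None
    with requests.get(url, stream=True) as r:
        r.raise_for_status()
        with open(file_name, "wb") as f:
            for chunk in r.iter_content(chunk_size=8192):
                f.write(chunk)
    print("Video saved to:", file_name)
    return file_name

# Hypothetical usage: download_video(main())
```
{% endcode %}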
**Processing time**: \~1.5 min. **Low-res GIF preview**:

"She turns around and smiles, then slowly walks out of the frame."

--- # Source: https://docs.aimlapi.com/api-references/video-models/openai/sora-2-pro-i2v.md # sora-2-pro-i2v {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `openai/sora-2-pro-i2v` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} Sora 2 Pro is a state-of-the-art media generation model, the most advanced in the Sora 2 family, generating videos with synced audio. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a reference image and a prompt.\ This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/sora-2-pro-i2v"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"image_url":{"type":"string","format":"uri","description":"A URL or a Base64-encoded image file used as the initial frame for video generation.\nThe image dimensions must match the selected video resolution and aspect ratio.\nSupported configurations include:\n720p with aspect ratios:\n- 16:9 — 1280x720\n- 9:16 — 720x1280\n\n1080p with aspect ratios:\n- 16:9 — 1792x1024\n- 9:16 — 1024x1792"},"resolution":{"type":"string","enum":["720p","1080p"],"default":"1080p","description":"The resolution of the output video, where the number refers to the short side in pixels."},"aspect_ratio":{"type":"string","enum":["16:9","9:16"],"default":"16:9","description":"The aspect ratio of the generated video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[4,8,12],"default":"4"}},"required":["model","prompt","image_url"],"title":"openai/sora-2-pro-i2v"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AI/ML API key instead of : api_key = "" # Creating and sending a video generation task to the server def generate_video(): url = "https://api.aimlapi.com/v2/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "openai/sora-2-pro-i2v", "prompt": "She turns around and smiles, then slowly walks out of the frame.", "image_url": "https://cdn.openai.com/API/docs/images/sora/woman_skyline_original_720p.jpeg", "resolution": "720p", "duration": 4 } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = "https://api.aimlapi.com/v2/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Generate video gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... 
Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ''; // Creating and sending a video generation task to the server async function generateVideo() { const url = 'https://api.aimlapi.com/v2/video/generations'; const data = { model: 'openai/sora-2-pro-i2v', prompt: 'She turns around and smiles, then slowly walks out of the frame.', image_url: 'https://cdn.openai.com/API/docs/images/sora/woman_skyline_original_720p.jpeg', resolution: '720p', aspect_ratio: '16:9', duration: 4, }; try { const response = await fetch(url, { method: 'POST', headers: { Authorization: `Bearer ${apiKey}`, 'Content-Type': 'application/json', }, body: JSON.stringify(data), }); if (!response.ok) { const errorText = await response.text(); console.error(`Error: ${response.status} - ${errorText}`); return null; } const responseData = await response.json(); console.log(responseData); return responseData; } catch (error) { console.error('Request failed:', error); return null; } } // Requesting the result of the task from the server using the generation_id async function getVideo(genId) { const url = new URL('https://api.aimlapi.com/v2/video/generations'); url.searchParams.append('generation_id', genId); try { const response = await fetch(url, { method: 'GET', headers: { Authorization: `Bearer ${apiKey}`, 'Content-Type': 'application/json', }, }); return await response.json(); } catch (error) { console.error('Error fetching video:', error); return null; } } // Initiates video generation and checks the status every 10 seconds until completion or timeout async function main() { const genResponse = await generateVideo(); if (!genResponse) return; const genId = genResponse.id; console.log('Generation ID:', genId); if (genId) { const timeout = 600 * 1000; // 10 minutes const startTime = Date.now(); while (Date.now() - startTime < timeout) { const responseData = await getVideo(genId); if (!responseData) { console.error('Error: No response from API'); break; } const status = responseData.status; console.log('Status:', status); if (['waiting', 'active', 'queued', 'generating'].includes(status)) { console.log('Still waiting... Checking again in 10 seconds.'); await new Promise((resolve) => setTimeout(resolve, 10000)); } else { console.log('Processing complete:\n', responseData); return responseData; } } console.log('Timeout reached. Stopping.'); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
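The POST schema above requires the reference image dimensions to exactly match the selected resolution and aspect ratio (for example, 1792x1024 for 1080p 16:9). If your source image has different dimensions, a sketch like the following can resize it and return a Base64 string. This assumes Pillow is installed; whether the endpoint expects a bare Base64 string or a data-URI prefix is an assumption to verify against the API's behavior.

{% code overflow="wrap" %}
```python
import base64
from io import BytesIO

from PIL import Image  # pip install pillow

# Target sizes taken from the request schema above
TARGET_SIZES = {
    ("720p", "16:9"): (1280, 720),
    ("720p", "9:16"): (720, 1280),
    ("1080p", "16:9"): (1792, 1024),
    ("1080p", "9:16"): (1024, 1792),
}

def image_to_base64(path, resolution="1080p", aspect_ratio="16:9"):
    """Resize a local image to the required dimensions and return it Base64-encoded."""
    target = TARGET_SIZES[(resolution, aspect_ratio)]
    img = Image.open(path).convert("RGB").resize(target)
    buf = BytesIO()
    img.save(buf, format="JPEG")
    return base64.b64encode(buf.getvalue()).decode("utf-8")

# Example: pass the result as `image_url` in the request payload.
# Whether a plain Base64 string or a data URI is expected is an assumption here:
# data["image_url"] = "data:image/jpeg;base64," + image_to_base64("my_frame.jpg")
```
{% endcode %}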
Statuses

| Status | Description |
| --- | --- |
| `queued` | Job is waiting in queue |
| `generating` | Video is being generated |
| `completed` | Generation successful, video available |
| `error` | Generation failed, check `error` field |
Response {% code overflow="wrap" %} ```json5 Generation ID: video_68e57b98ad70819182a47a135a5fbcc407c36053d7b22880:openai/sora-2-pro-i2v Status: generating Still waiting... Checking again in 10 seconds. Status: queued Still waiting... Checking again in 10 seconds. Status: queued Still waiting... Checking again in 10 seconds. ... Processing complete: { id: 'video_68e57b98ad70819182a47a135a5fbcc407c36053d7b22880:openai/sora-2-pro-i2v', status: 'completed', video: { url: 'https://cdn.aimlapi.com/generations/hedgehog/1759870057414-7a63e416-a2c0-4497-a55a-11665b7e1c17.mp4' } } ``` {% endcode %}
**Processing time**: \~1.5 min. **Low-res GIF preview**:

"She turns around and smiles, then slowly walks out of the frame."

--- # Source: https://docs.aimlapi.com/api-references/video-models/openai/sora-2-pro-t2v.md # sora-2-pro-t2v {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `openai/sora-2-pro-t2v` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} Sora 2 Pro is a state-of-the-art media generation model, the most advanced in the Sora 2 family, generating videos with synced audio. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a prompt.\ This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/sora-2-pro-t2v"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"resolution":{"type":"string","enum":["720p","1080p"],"default":"1080p","description":"The resolution of the output video, where the number refers to the short side in pixels."},"aspect_ratio":{"type":"string","enum":["16:9","9:16"],"default":"16:9","description":"The aspect ratio of the generated video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[4,8,12],"default":"4"}},"required":["model","prompt"],"title":"openai/sora-2-pro-t2v"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% hint style="info" %} Generation takes about 80–90 seconds for a 4-second 1080p video. {% endhint %} {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AI/ML API key instead of : api_key = "" # Creating and sending a video generation task to the server def generate_video(): url = "https://api.aimlapi.com/v2/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "openai/sora-2-pro-t2v", "prompt": "A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. 
We see it's coming.", "resolution": "1080p", "duration": 4 } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = "https://api.aimlapi.com/v2/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Generate video gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; // Creating and sending a video generation task to the server async function generateVideo() { const url = "https://api.aimlapi.com/v2/video/generations"; const data = { model: "openai/sora-2-pro-t2v", prompt: "A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. 
We see it's coming.", resolution: "720p", duration: 4, }; try { const response = await fetch(url, { method: "POST", headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json", }, body: JSON.stringify(data), }); if (!response.ok) { const errorText = await response.text(); console.error(`Error: ${response.status} - ${errorText}`); return null; } const responseData = await response.json(); console.log(responseData); return responseData; } catch (error) { console.error("Request failed:", error); return null; } } // Requesting the result of the task from the server using the generation_id async function getVideo(genId) { const url = new URL("https://api.aimlapi.com/v2/video/generations"); url.searchParams.append("generation_id", genId); try { const response = await fetch(url, { method: "GET", headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json", }, }); return await response.json(); } catch (error) { console.error("Error fetching video:", error); return null; } } // Initiates video generation and checks the status every 10 seconds until completion or timeout async function main() { const genResponse = await generateVideo(); if (!genResponse) return; const genId = genResponse.id; console.log("Generation ID:", genId); if (genId) { const timeout = 600 * 1000; // 10 minutes const startTime = Date.now(); while (Date.now() - startTime < timeout) { const responseData = await getVideo(genId); if (!responseData) { console.error("Error: No response from API"); break; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); await new Promise((resolve) => setTimeout(resolve, 10000)); } else { console.log("Processing complete:\n", responseData); return responseData; } } console.log("Timeout reached. Stopping."); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
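The request schema above also exposes `aspect_ratio` and `duration`. As a hypothetical variation of the payload used in the full example, a vertical 12-second 1080p clip could be requested like this (all values come from the schema's enums):

{% code overflow="wrap" %}
```python
data = {
    "model": "openai/sora-2-pro-t2v",
    "prompt": "A menacing evil dragon appears in a distance above the tallest mountain, "
              "then rushes toward the camera with its jaws open, revealing massive fangs. "
              "We see it's coming.",
    "resolution": "1080p",   # "720p" or "1080p" for this model
    "aspect_ratio": "9:16",  # vertical video
    "duration": 12,          # allowed values: 4, 8, 12 seconds
}
```
{% endcode %}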
Statuses

| Status | Description |
| --- | --- |
| `queued` | Job is waiting in queue |
| `generating` | Video is being generated |
| `completed` | Generation successful, video available |
| `error` | Generation failed, check `error` field |
Response {% code overflow="wrap" %} ```json5 Generation ID: video_68e56d5e6ca88191b0aa3d18416ebdd3088ac20c82665bd8:openai/sora-2-t2v Status: generating Still waiting... Checking again in 10 seconds. Status: queued Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete: { id: 'video_68e576d4704c819088a4fb1315dc0667062ac79f6ced74ae:openai/sora-2-pro-t2v', status: 'completed', video: { url: 'https://cdn.aimlapi.com/generations/hedgehog/1759868837791-a28d3ad2-df8e-4016-bec2-9db9ff78d038.mp4' } } ``` {% endcode %}
**Processing time**: \~1 min 14 sec. **Low-res GIF preview**:

"A menacing evil dragon appears in a distance above the tallest mountain, then rushes
toward the camera with its jaws open, revealing massive fangs. We see it's coming."

--- # Source: https://docs.aimlapi.com/api-references/video-models/openai/sora-2-t2v.md # sora-2-t2v {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `openai/sora-2-t2v` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} Sora 2 is a new, powerful media generation model that generates videos with synced audio. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a prompt.\ This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["openai/sora-2-t2v"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"resolution":{"type":"string","enum":["720p"],"default":"720p","description":"The resolution of the output video, where the number refers to the short side in pixels."},"aspect_ratio":{"type":"string","enum":["16:9","9:16"],"default":"16:9","description":"The aspect ratio of the generated video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[4,8,12],"default":"4"}},"required":["model","prompt"],"title":"openai/sora-2-t2v"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% hint style="info" %} Generation takes about 50–60 seconds for a 4-second 720p video. {% endhint %} {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AI/ML API key instead of : api_key = "" # Creating and sending a video generation task to the server def generate_video(): url = "https://api.aimlapi.com/v2/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "openai/sora-2-t2v", "prompt": "A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. 
We see it's coming.", "resolution": "720p", "duration": 4 } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = "https://api.aimlapi.com/v2/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Generate video gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; // Creating and sending a video generation task to the server async function generateVideo() { const url = "https://api.aimlapi.com/v2/video/generations"; const data = { model: "openai/sora-2-t2v", prompt: "A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. 
We see it's coming.", resolution: "720p", duration: 4, }; try { const response = await fetch(url, { method: "POST", headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json", }, body: JSON.stringify(data), }); if (!response.ok) { const errorText = await response.text(); console.error(`Error: ${response.status} - ${errorText}`); return null; } const responseData = await response.json(); console.log(responseData); return responseData; } catch (error) { console.error("Request failed:", error); return null; } } // Requesting the result of the task from the server using the generation_id async function getVideo(genId) { const url = new URL("https://api.aimlapi.com/v2/video/generations"); url.searchParams.append("generation_id", genId); try { const response = await fetch(url, { method: "GET", headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json", }, }); return await response.json(); } catch (error) { console.error("Error fetching video:", error); return null; } } // Initiates video generation and checks the status every 10 seconds until completion or timeout async function main() { const genResponse = await generateVideo(); if (!genResponse) return; const genId = genResponse.id; console.log("Generation ID:", genId); if (genId) { const timeout = 600 * 1000; // 10 minutes const startTime = Date.now(); while (Date.now() - startTime < timeout) { const responseData = await getVideo(genId); if (!responseData) { console.error("Error: No response from API"); break; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); await new Promise((resolve) => setTimeout(resolve, 10000)); } else { console.log("Processing complete:\n", responseData); return responseData; } } console.log("Timeout reached. Stopping."); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Statuses

| Status | Description |
| --- | --- |
| `queued` | Job is waiting in queue |
| `generating` | Video is being generated |
| `completed` | Generation successful, video available |
| `error` | Generation failed, check `error` field |
Response {% code overflow="wrap" %} ```json5 Generation ID: video_68e56d5e6ca88191b0aa3d18416ebdd3088ac20c82665bd8:openai/sora-2-t2v Status: generating Still waiting... Checking again in 10 seconds. Status: queued Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete: { id: 'video_68e56d5e6ca88191b0aa3d18416ebdd3088ac20c82665bd8:openai/sora-2-t2v', status: 'completed', video: { url: 'https://cdn.aimlapi.com/generations/hedgehog/1759866285599-0cdfb138-c03a-49d4-a601-4f6413e27b15.mp4' } } ``` {% endcode %}
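The response schema also defines an `error` object for failed tasks. The sketch below shows one way the result returned by the polling loop above could be reported, using only fields documented in the schema.

{% code overflow="wrap" %}
```python
def report_result(response_data):
    """Print the outcome of a finished generation task (fields per the schema above)."""
    status = response_data.get("status")
    if status == "completed":
        print("Video URL:", response_data["video"]["url"])
    elif status == "error":
        err = response_data.get("error") or {}
        print(f"Generation failed: {err.get('name')} - {err.get('message')}")
    else:
        print("Unexpected status:", status)
```
{% endcode %}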
**Processing time**: \~1 min 14 sec. **Low-res GIF preview**:

"A menacing evil dragon appears in a distance above the tallest mountain, then rushes
toward the camera with its jaws open, revealing massive fangs. We see it's coming."

--- # Source: https://docs.aimlapi.com/api-references/speech-models/voice-chat/minimax/speech-2.5-hd-preview.md # Speech 2.5 HD Preview {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `minimax/speech-2-5-hd-preview` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} A high-definition text-to-speech model with enhanced multilingual expressiveness, more precise voice replication, and expanded support for 40 languages. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import os import requests def main(): url = "https://api.aimlapi.com/v1/tts" headers = { # Insert your AIML API Key instead of : "Authorization": "Bearer ", } payload = { "model": "minimax/speech-2.5-hd-preview", "text": "Hi! What are you doing today?", "voice_setting": { "voice_id": "Wise_Woman" } } response = requests.post(url, headers=headers, json=payload, stream=True) dist = os.path.abspath("your_file_name.wav") with open(dist, "wb") as write_stream: for chunk in response.iter_content(chunk_size=8192): if chunk: write_stream.write(chunk) print("Audio saved to:", dist) main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript import fs from "fs"; import path from "path"; async function main() { const url = "https://api.aimlapi.com/v1/tts"; const payload = { model: "minimax/speech-2.5-hd-preview", text: "Hi! What are you doing today?", voice_setting: { voice_id: "Wise_Woman" } }; const response = await fetch(url, { method: "POST", headers: { // Insert your AIML API Key instead of : "Authorization": `Bearer `, "Content-Type": "application/json" }, body: JSON.stringify(payload) }); // Read response as ArrayBuffer and convert to Buffer const arrayBuffer = await response.arrayBuffer(); const buffer = Buffer.from(arrayBuffer); // Save audio to file in the current working directory const dist = path.join(process.cwd(), "your_file_name.wav"); fs.writeFileSync(dist, buffer); console.log("Audio saved to:", dist); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ``` Audio saved to: c:\Users\user\Documents\Python Scripts\TTSes\your_file_name.wav ``` {% endcode %}
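The API schema below documents additional `voice_setting` and `audio_setting` fields. The payload sketch here combines a few of them; parameter names and value ranges are taken from that schema, while the specific values chosen are illustrative only.

{% code overflow="wrap" %}
```python
payload = {
    "model": "minimax/speech-2.5-hd-preview",
    "text": "Numbers like 2025 are read more naturally with text normalization enabled.",
    "voice_setting": {
        "voice_id": "Calm_Woman",    # one of the predefined voices listed in the schema
        "speed": 1.2,                # 0.5 - 2.0
        "pitch": -2,                 # -12 ... 12
        "emotion": "happy",
        "text_normalization": True,  # improves number reading, adds latency
    },
    "audio_setting": {
        "format": "mp3",             # "mp3", "pcm", or "flac"
        "sample_rate": 32000,
        "bitrate": 128000,
    },
}
```
{% endcode %}

If you request `"format": "mp3"`, saving the result with an `.mp3` extension rather than the `.wav` used in the example above may be more accurate.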
{% embed url="" %} ## API Schema ## POST /v1/tts > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.TextToSpeechResponse":{"type":"object","properties":{"metadata":{"type":"object","properties":{"transaction_key":{"type":"string"},"request_id":{"type":"string"},"sha256":{"type":"string"},"created":{"type":"string","format":"date-time"},"duration":{"type":"number"},"channels":{"type":"number"},"models":{"type":"array","items":{"type":"string"}},"model_info":{"type":"object","additionalProperties":{"type":"object","properties":{"name":{"type":"string"},"version":{"type":"string"},"arch":{"type":"string"}},"required":["name","version","arch"]}}},"required":["transaction_key","request_id","sha256","created","duration","channels","models","model_info"]}},"required":["metadata"]}}},"paths":{"/v1/tts":{"post":{"operationId":"VoiceModelsController_textToSpeech_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["minimax/speech-2.5-hd-preview"]},"text":{"type":"string","minLength":1,"maxLength":5000,"description":"The text content to be converted to speech."},"voice_setting":{"type":"object","properties":{"voice_id":{"anyOf":[{"type":"string","enum":["Wise_Woman","Friendly_Person","Inspirational_girl","Deep_Voice_Man","Calm_Woman","Casual_Guy","Lively_Girl","Patient_Man","Young_Knight","Determined_Man","Lovely_Girl","Decent_Boy","Imposing_Manner","Elegant_Man","Abbess","Sweet_Girl_2","Exuberant_Girl"]},{"type":"string","minLength":1,"maxLength":64}],"default":"Wise_Woman","description":"A predefined system voice for text-to-speech synthesis."},"speed":{"type":"number","minimum":0.5,"maximum":2,"default":1,"description":"Adjusts the speed of the voice. A value of 1.0 is the default speed, while values less than 1.0 slow down the speech, and values greater than 1.0 speed it up."},"vol":{"type":"number","minimum":0.01,"maximum":10,"default":1,"description":"The volume of the generated speech. Range: (0, 10]. Larger values indicate larger volumes."},"pitch":{"type":"number","minimum":-12,"maximum":12,"default":0,"description":"The pitch of the generated speech. Range: [-12, 12]. 0 = default voice output."},"emotion":{"type":"string","enum":["happy","sad","angry","fearful","disgusted","surprised","neutral"],"description":"Emotional tone to apply to the synthesized speech. Controls the emotional expression of the generated voice output."},"text_normalization":{"type":"boolean","default":false,"description":"English text normalization support. Improves number-reading but increases latency."}},"default":{"voice_id":"Wise_Woman"},"description":"Voice settings overriding stored settings for the given voice. They are applied only on the given request."},"audio_setting":{"type":"object","properties":{"sample_rate":{"type":"integer","description":"Audio sample rate in Hz.","enum":[8000,16000,22050,24000,32000,44100]},"bitrate":{"type":"integer","description":"Audio bitrate in bits per second. Controls the compression level and audio quality. Higher bitrates provide better quality but larger file sizes.","enum":[32000,64000,128000,256000]},"format":{"type":"string","enum":["mp3","pcm","flac"],"default":"mp3","description":"Audio output format. 
MP3 provides good compression and compatibility, PCM offers uncompressed high quality, and FLAC provides lossless compression."},"channel":{"type":"integer","description":"Number of audio channels. 1 for mono (single channel), 2 for stereo (dual channel) output.","enum":[1,2]}},"description":"Audio output configuration"},"pronunciation_dict":{"type":"object","properties":{"tone":{"type":"array","items":{"type":"string"},"description":"Replacement of text and pronunciations. Format: [\"燕少飞/(yan4)(shao3)(fei1)\", \"达菲/(da2)(fei1)\", \"omg/oh my god\"]"}},"required":["tone"],"description":"Custom pronunciation dictionary for handling specific words or phrases. Allows fine-tuning of how certain text should be pronounced using phonetic representations."},"timbre_weights":{"type":"array","items":{"type":"object","properties":{"voice_id":{"anyOf":[{"type":"string","enum":["Wise_Woman","Friendly_Person","Inspirational_girl","Deep_Voice_Man","Calm_Woman","Casual_Guy","Lively_Girl","Patient_Man","Young_Knight","Determined_Man","Lovely_Girl","Decent_Boy","Imposing_Manner","Elegant_Man","Abbess","Sweet_Girl_2","Exuberant_Girl"]},{"type":"string","minLength":1,"maxLength":64}],"description":"A predefined system voice for text-to-speech synthesis."},"weight":{"type":"integer","minimum":1,"maximum":100,"description":"Weight for voice mixing. Range: [1, 100]. Higher weights are sampled more heavily."}},"required":["voice_id","weight"]},"maxItems":4,"description":"Voice mixing configuration allowing combination of up to 4 different voices with specified weights. Each voice contributes to the final output based on its weight value (1-100)."},"stream":{"type":"boolean","default":false,"description":"Enable streaming mode for real-time audio generation. When enabled, audio is generated and delivered in chunks as it's processed."},"language_boost":{"type":"string","enum":["Chinese","Chinese,Yue","English","Arabic","Russian","Spanish","French","Portuguese","German","Turkish","Dutch","Ukrainian","Vietnamese","Indonesian","Japanese","Italian","Korean","Thai","Polish","Romanian","Greek","Czech","Finnish","Hindi","Bulgarian","Danish","Hebrew","Malay","Persian","Slovak","Swedish","Croatian","Filipino","Hungarian","Norwegian","Slovenian","Catalan","Nynorsk","Tamil","Afrikaans","auto"],"description":"Language recognition enhancement option."},"voice_modify":{"type":"object","properties":{"pitch":{"type":"integer","minimum":-100,"maximum":100,"description":"Pitch level (-100 to 100)"},"intensity":{"type":"integer","minimum":-100,"maximum":100,"description":"Intensity level (-100 to 100)"},"timbre":{"type":"integer","minimum":-100,"maximum":100,"description":"Timbre level (-100 to 100)"},"sound_effects":{"type":"string","enum":["spacious_echo","auditorium_echo","lofi_telephone","robotic"],"description":"Audio effects to apply to the synthesized speech. Includes options like spacious_echo, auditorium_echo, lofi_telephone, and robotic effects."}},"description":"Voice modification settings for adjusting pitch, intensity, timbre, and applying sound effects to customize the voice characteristics."},"subtitle_enable":{"type":"boolean","default":false,"description":"Enable subtitle generation service. Only available for non-streaming requests. Generates timing information for the synthesized speech."},"output_format":{"type":"string","enum":["url","hex"],"default":"hex","description":"Format of the output content for non-streaming requests. 
Controls how the generated audio data is encoded in the response."}},"required":["model","text"]}}}},"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.TextToSpeechResponse"}}}}},"tags":["Voice Models"]}}}} ``` --- # Source: https://docs.aimlapi.com/api-references/speech-models/voice-chat/minimax/speech-2.5-turbo-preview.md # Speech 2.5 Turbo Preview {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `minimax/speech-2.5-turbo-preview` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} A high-definition text-to-speech model with enhanced multilingual expressiveness, more precise voice replication, and expanded support for 40 languages. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## Quick Code Example Here is an example of generating an audio response to the user input provided in the `text` parameter. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import os import requests def main(): url = "https://api.aimlapi.com/v1/tts" headers = { # Insert your AIML API Key instead of : "Authorization": "Bearer ", } payload = { "model": "minimax/speech-2.5-turbo-preview", "text": "Hi! What are you doing today?", "voice_setting": { "voice_id": "Wise_Woman" } } response = requests.post(url, headers=headers, json=payload, stream=True) dist = os.path.abspath("your_file_name.wav") with open(dist, "wb") as write_stream: for chunk in response.iter_content(chunk_size=8192): if chunk: write_stream.write(chunk) print("Audio saved to:", dist) main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript import fs from "fs"; import path from "path"; async function main() { const url = "https://api.aimlapi.com/v1/tts"; const payload = { model: "minimax/speech-2.5-turbo-preview", text: "Hi! What are you doing today?", voice_setting: { voice_id: "Wise_Woman" } }; const response = await fetch(url, { method: "POST", headers: { // Insert your AIML API Key instead of : "Authorization": `Bearer `, "Content-Type": "application/json" }, body: JSON.stringify(payload) }); // Read response as ArrayBuffer and convert to Buffer const arrayBuffer = await response.arrayBuffer(); const buffer = Buffer.from(arrayBuffer); // Save audio to file in the current working directory const dist = path.join(process.cwd(), "your_file_name.wav"); fs.writeFileSync(dist, buffer); console.log("Audio saved to:", dist); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response ``` Audio saved to: c:\Users\user\Documents\Python Scripts\TTSes\your_file_name.wav ```
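You can also shape the output with the optional `voice_setting` and `audio_setting` fields documented in the schema below (speed, volume, pitch, emotion, output format, sample rate, channels). The following is a minimal sketch under those schema assumptions; the chosen values, the API key placeholder, and the output file name are illustrative only.

{% code overflow="wrap" %}
```python
import requests

# Illustrative placeholder: insert your AIML API Key.
API_KEY = "<YOUR_AIMLAPI_KEY>"

def main():
    response = requests.post(
        "https://api.aimlapi.com/v1/tts",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "minimax/speech-2.5-turbo-preview",
            "text": "Hi! What are you doing today?",
            # Optional per-request voice adjustments (see the schema below).
            "voice_setting": {
                "voice_id": "Wise_Woman",
                "speed": 1.2,        # 0.5 to 2.0; 1.0 is the default pace
                "pitch": -2,         # -12..12; 0 keeps the default pitch
                "vol": 1.5,          # (0, 10]; larger values are louder
                "emotion": "happy",  # one of the documented emotion values
            },
            # Optional output configuration; MP3 is the default format.
            "audio_setting": {"format": "mp3", "sample_rate": 32000, "channel": 1},
        },
        stream=True,
    )
    response.raise_for_status()

    # Write the returned audio bytes to disk, as in the quick example above.
    with open("custom_voice.mp3", "wb") as f:
        for chunk in response.iter_content(chunk_size=8192):
            if chunk:
                f.write(chunk)
    print("Audio saved to custom_voice.mp3")

if __name__ == "__main__":
    main()
```
{% endcode %}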
{% embed url="" %} ## API Schema ## POST /v1/tts > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.TextToSpeechResponse":{"type":"object","properties":{"metadata":{"type":"object","properties":{"transaction_key":{"type":"string"},"request_id":{"type":"string"},"sha256":{"type":"string"},"created":{"type":"string","format":"date-time"},"duration":{"type":"number"},"channels":{"type":"number"},"models":{"type":"array","items":{"type":"string"}},"model_info":{"type":"object","additionalProperties":{"type":"object","properties":{"name":{"type":"string"},"version":{"type":"string"},"arch":{"type":"string"}},"required":["name","version","arch"]}}},"required":["transaction_key","request_id","sha256","created","duration","channels","models","model_info"]}},"required":["metadata"]}}},"paths":{"/v1/tts":{"post":{"operationId":"VoiceModelsController_textToSpeech_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["minimax/speech-2.5-turbo-preview"]},"text":{"type":"string","minLength":1,"maxLength":5000,"description":"The text content to be converted to speech."},"voice_setting":{"type":"object","properties":{"voice_id":{"anyOf":[{"type":"string","enum":["Wise_Woman","Friendly_Person","Inspirational_girl","Deep_Voice_Man","Calm_Woman","Casual_Guy","Lively_Girl","Patient_Man","Young_Knight","Determined_Man","Lovely_Girl","Decent_Boy","Imposing_Manner","Elegant_Man","Abbess","Sweet_Girl_2","Exuberant_Girl"]},{"type":"string","minLength":1,"maxLength":64}],"default":"Wise_Woman","description":"A predefined system voice for text-to-speech synthesis."},"speed":{"type":"number","minimum":0.5,"maximum":2,"default":1,"description":"Adjusts the speed of the voice. A value of 1.0 is the default speed, while values less than 1.0 slow down the speech, and values greater than 1.0 speed it up."},"vol":{"type":"number","minimum":0.01,"maximum":10,"default":1,"description":"The volume of the generated speech. Range: (0, 10]. Larger values indicate larger volumes."},"pitch":{"type":"number","minimum":-12,"maximum":12,"default":0,"description":"The pitch of the generated speech. Range: [-12, 12]. 0 = default voice output."},"emotion":{"type":"string","enum":["happy","sad","angry","fearful","disgusted","surprised","neutral"],"description":"Emotional tone to apply to the synthesized speech. Controls the emotional expression of the generated voice output."},"text_normalization":{"type":"boolean","default":false,"description":"English text normalization support. Improves number-reading but increases latency."}},"default":{"voice_id":"Wise_Woman"},"description":"Voice settings overriding stored settings for the given voice. They are applied only on the given request."},"audio_setting":{"type":"object","properties":{"sample_rate":{"type":"integer","description":"Audio sample rate in Hz.","enum":[8000,16000,22050,24000,32000,44100]},"bitrate":{"type":"integer","description":"Audio bitrate in bits per second. Controls the compression level and audio quality. Higher bitrates provide better quality but larger file sizes.","enum":[32000,64000,128000,256000]},"format":{"type":"string","enum":["mp3","pcm","flac"],"default":"mp3","description":"Audio output format. 
MP3 provides good compression and compatibility, PCM offers uncompressed high quality, and FLAC provides lossless compression."},"channel":{"type":"integer","description":"Number of audio channels. 1 for mono (single channel), 2 for stereo (dual channel) output.","enum":[1,2]}},"description":"Audio output configuration"},"pronunciation_dict":{"type":"object","properties":{"tone":{"type":"array","items":{"type":"string"},"description":"Replacement of text and pronunciations. Format: [\"燕少飞/(yan4)(shao3)(fei1)\", \"达菲/(da2)(fei1)\", \"omg/oh my god\"]"}},"required":["tone"],"description":"Custom pronunciation dictionary for handling specific words or phrases. Allows fine-tuning of how certain text should be pronounced using phonetic representations."},"timbre_weights":{"type":"array","items":{"type":"object","properties":{"voice_id":{"anyOf":[{"type":"string","enum":["Wise_Woman","Friendly_Person","Inspirational_girl","Deep_Voice_Man","Calm_Woman","Casual_Guy","Lively_Girl","Patient_Man","Young_Knight","Determined_Man","Lovely_Girl","Decent_Boy","Imposing_Manner","Elegant_Man","Abbess","Sweet_Girl_2","Exuberant_Girl"]},{"type":"string","minLength":1,"maxLength":64}],"description":"A predefined system voice for text-to-speech synthesis."},"weight":{"type":"integer","minimum":1,"maximum":100,"description":"Weight for voice mixing. Range: [1, 100]. Higher weights are sampled more heavily."}},"required":["voice_id","weight"]},"maxItems":4,"description":"Voice mixing configuration allowing combination of up to 4 different voices with specified weights. Each voice contributes to the final output based on its weight value (1-100)."},"stream":{"type":"boolean","default":false,"description":"Enable streaming mode for real-time audio generation. When enabled, audio is generated and delivered in chunks as it's processed."},"language_boost":{"type":"string","enum":["Chinese","Chinese,Yue","English","Arabic","Russian","Spanish","French","Portuguese","German","Turkish","Dutch","Ukrainian","Vietnamese","Indonesian","Japanese","Italian","Korean","Thai","Polish","Romanian","Greek","Czech","Finnish","Hindi","Bulgarian","Danish","Hebrew","Malay","Persian","Slovak","Swedish","Croatian","Filipino","Hungarian","Norwegian","Slovenian","Catalan","Nynorsk","Tamil","Afrikaans","auto"],"description":"Language recognition enhancement option."},"voice_modify":{"type":"object","properties":{"pitch":{"type":"integer","minimum":-100,"maximum":100,"description":"Pitch level (-100 to 100)"},"intensity":{"type":"integer","minimum":-100,"maximum":100,"description":"Intensity level (-100 to 100)"},"timbre":{"type":"integer","minimum":-100,"maximum":100,"description":"Timbre level (-100 to 100)"},"sound_effects":{"type":"string","enum":["spacious_echo","auditorium_echo","lofi_telephone","robotic"],"description":"Audio effects to apply to the synthesized speech. Includes options like spacious_echo, auditorium_echo, lofi_telephone, and robotic effects."}},"description":"Voice modification settings for adjusting pitch, intensity, timbre, and applying sound effects to customize the voice characteristics."},"subtitle_enable":{"type":"boolean","default":false,"description":"Enable subtitle generation service. Only available for non-streaming requests. Generates timing information for the synthesized speech."},"output_format":{"type":"string","enum":["url","hex"],"default":"hex","description":"Format of the output content for non-streaming requests. 
Controls how the generated audio data is encoded in the response."}},"required":["model","text"]}}}},"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.TextToSpeechResponse"}}}}},"tags":["Voice Models"]}}}} ``` --- # Source: https://docs.aimlapi.com/api-references/speech-models/voice-chat/minimax/speech-2.6-hd.md # Speech 2.6 HD {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `minimax/speech-2.6-hd` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} The model generates speech from text prompts and multiple voices, optimized for high-fidelity, natural-sounding output. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import os import requests def main(): url = "https://api.aimlapi.com/v1/tts" headers = { # Insert your AIML API Key instead of : "Authorization": "Bearer ", } payload = { "model": "minimax/speech-2.6-hd", "text": "Hi! What are you doing today?", "voice_setting": { "voice_id": "Wise_Woman" } } response = requests.post(url, headers=headers, json=payload, stream=True) dist = os.path.abspath("your_file_name.wav") with open(dist, "wb") as write_stream: for chunk in response.iter_content(chunk_size=8192): if chunk: write_stream.write(chunk) print("Audio saved to:", dist) main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript import fs from "fs"; import path from "path"; async function main() { const url = "https://api.aimlapi.com/v1/tts"; const payload = { model: "minimax/speech-2.6-hd", text: "Hi! What are you doing today?", voice_setting: { voice_id: "Wise_Woman" } }; const response = await fetch(url, { method: "POST", headers: { // Insert your AIML API Key instead of : "Authorization": `Bearer `, "Content-Type": "application/json" }, body: JSON.stringify(payload) }); // Read response as ArrayBuffer and convert to Buffer const arrayBuffer = await response.arrayBuffer(); const buffer = Buffer.from(arrayBuffer); // Save audio to file in the current working directory const dist = path.join(process.cwd(), "your_file_name.wav"); fs.writeFileSync(dist, buffer); console.log("Audio saved to:", dist); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ``` Audio saved to: c:\Users\user\Documents\Python Scripts\TTSes\your_file_name.wav ``` {% endcode %}
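The schema below also documents `timbre_weights`, which blends up to four predefined voices according to their weights. Here is a minimal sketch assuming the same `/v1/tts` request shape as above; whether the mix may be combined with `voice_setting.voice_id` is not stated in the schema, so this example sets only the mix, and the weights and file name are illustrative.

{% code overflow="wrap" %}
```python
import requests

# Illustrative placeholder: insert your AIML API Key.
API_KEY = "<YOUR_AIMLAPI_KEY>"

def main():
    payload = {
        "model": "minimax/speech-2.6-hd",
        "text": "Hi! What are you doing today?",
        # Blend up to 4 predefined voices; weights range from 1 to 100,
        # and voices with higher weights are sampled more heavily.
        "timbre_weights": [
            {"voice_id": "Wise_Woman", "weight": 70},
            {"voice_id": "Calm_Woman", "weight": 30},
        ],
    }
    response = requests.post(
        "https://api.aimlapi.com/v1/tts",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
        stream=True,
    )
    response.raise_for_status()

    # Write the returned audio bytes to disk, as in the code example above.
    with open("mixed_voice.mp3", "wb") as f:
        for chunk in response.iter_content(chunk_size=8192):
            if chunk:
                f.write(chunk)
    print("Audio saved to mixed_voice.mp3")

if __name__ == "__main__":
    main()
```
{% endcode %}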
**Generation time**: \~ 5.8 s. {% embed url="" %} ## API Schema ## POST /v1/tts > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.TextToSpeechResponse":{"type":"object","properties":{"metadata":{"type":"object","properties":{"transaction_key":{"type":"string"},"request_id":{"type":"string"},"sha256":{"type":"string"},"created":{"type":"string","format":"date-time"},"duration":{"type":"number"},"channels":{"type":"number"},"models":{"type":"array","items":{"type":"string"}},"model_info":{"type":"object","additionalProperties":{"type":"object","properties":{"name":{"type":"string"},"version":{"type":"string"},"arch":{"type":"string"}},"required":["name","version","arch"]}}},"required":["transaction_key","request_id","sha256","created","duration","channels","models","model_info"]}},"required":["metadata"]}}},"paths":{"/v1/tts":{"post":{"operationId":"VoiceModelsController_textToSpeech_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["minimax/speech-2.6-hd"]},"text":{"type":"string","minLength":1,"maxLength":5000,"description":"The text content to be converted to speech."},"voice_setting":{"type":"object","properties":{"voice_id":{"type":"string","enum":["Wise_Woman","Friendly_Person","Inspirational_girl","Deep_Voice_Man","Calm_Woman","Casual_Guy","Lively_Girl","Patient_Man","Young_Knight","Determined_Man","Lovely_Girl","Decent_Boy","Imposing_Manner","Elegant_Man","Abbess","Sweet_Girl_2","Exuberant_Girl"],"description":"A predefined system voice for text-to-speech synthesis."},"speed":{"type":"number","minimum":0.5,"maximum":2,"default":1,"description":"Adjusts the speed of the voice. A value of 1.0 is the default speed, while values less than 1.0 slow down the speech, and values greater than 1.0 speed it up."},"vol":{"type":"number","minimum":0.01,"maximum":10,"default":1,"description":"The volume of the generated speech. Range: (0, 10]. Larger values indicate larger volumes."},"pitch":{"type":"number","minimum":-12,"maximum":12,"default":0,"description":"The pitch of the generated speech. Range: [-12, 12]. 0 = default voice output."},"emotion":{"type":"string","enum":["happy","sad","angry","fearful","disgusted","surprised","neutral"],"description":"Emotional tone to apply to the synthesized speech. Controls the emotional expression of the generated voice output."},"text_normalization":{"type":"boolean","default":false,"description":"English text normalization support. Improves number-reading but increases latency."}},"default":{"voice_id":"Wise_Woman"},"required":["voice_id"],"description":"Voice settings overriding stored settings for the given voice. They are applied only on the given request."},"audio_setting":{"type":"object","properties":{"sample_rate":{"type":"integer","description":"Audio sample rate in Hz.","enum":[8000,16000,22050,24000,32000,44100]},"bitrate":{"type":"integer","description":"Audio bitrate in bits per second. Controls the compression level and audio quality. Higher bitrates provide better quality but larger file sizes.","enum":[32000,64000,128000,256000]},"format":{"type":"string","enum":["mp3","pcm","flac"],"default":"mp3","description":"Audio output format. 
MP3 provides good compression and compatibility, PCM offers uncompressed high quality, and FLAC provides lossless compression."},"channel":{"type":"integer","description":"Number of audio channels. 1 for mono (single channel), 2 for stereo (dual channel) output.","enum":[1,2]}},"description":"Audio output configuration"},"pronunciation_dict":{"type":"object","properties":{"tone":{"type":"array","items":{"type":"string"},"description":"Replacement of text and pronunciations. Format: [\"燕少飞/(yan4)(shao3)(fei1)\", \"达菲/(da2)(fei1)\", \"omg/oh my god\"]"}},"required":["tone"],"description":"Custom pronunciation dictionary for handling specific words or phrases. Allows fine-tuning of how certain text should be pronounced using phonetic representations."},"timbre_weights":{"type":"array","items":{"type":"object","properties":{"voice_id":{"type":"string","enum":["Wise_Woman","Friendly_Person","Inspirational_girl","Deep_Voice_Man","Calm_Woman","Casual_Guy","Lively_Girl","Patient_Man","Young_Knight","Determined_Man","Lovely_Girl","Decent_Boy","Imposing_Manner","Elegant_Man","Abbess","Sweet_Girl_2","Exuberant_Girl"],"description":"A predefined system voice for text-to-speech synthesis."},"weight":{"type":"integer","minimum":1,"maximum":100,"description":"Weight for voice mixing. Range: [1, 100]. Higher weights are sampled more heavily."}},"required":["voice_id","weight"]},"maxItems":4,"description":"Voice mixing configuration allowing combination of up to 4 different voices with specified weights. Each voice contributes to the final output based on its weight value (1-100)."},"stream":{"type":"boolean","default":false,"description":"Enable streaming mode for real-time audio generation. When enabled, audio is generated and delivered in chunks as it's processed."},"language_boost":{"type":"string","enum":["Chinese","Chinese,Yue","English","Arabic","Russian","Spanish","French","Portuguese","German","Turkish","Dutch","Ukrainian","Vietnamese","Indonesian","Japanese","Italian","Korean","Thai","Polish","Romanian","Greek","Czech","Finnish","Hindi","Bulgarian","Danish","Hebrew","Malay","Persian","Slovak","Swedish","Croatian","Filipino","Hungarian","Norwegian","Slovenian","Catalan","Nynorsk","Tamil","Afrikaans","auto"],"description":"Language recognition enhancement option."},"voice_modify":{"type":"object","properties":{"pitch":{"type":"integer","minimum":-100,"maximum":100,"description":"Pitch level (-100 to 100)"},"intensity":{"type":"integer","minimum":-100,"maximum":100,"description":"Intensity level (-100 to 100)"},"timbre":{"type":"integer","minimum":-100,"maximum":100,"description":"Timbre level (-100 to 100)"},"sound_effects":{"type":"string","enum":["spacious_echo","auditorium_echo","lofi_telephone","robotic"],"description":"Audio effects to apply to the synthesized speech. Includes options like spacious_echo, auditorium_echo, lofi_telephone, and robotic effects."}},"description":"Voice modification settings for adjusting pitch, intensity, timbre, and applying sound effects to customize the voice characteristics."},"subtitle_enable":{"type":"boolean","default":false,"description":"Enable subtitle generation service. Only available for non-streaming requests. Generates timing information for the synthesized speech."},"output_format":{"type":"string","enum":["url","hex"],"default":"hex","description":"Format of the output content for non-streaming requests. 
Controls how the generated audio data is encoded in the response."}},"required":["model","text"]}}}},"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.TextToSpeechResponse"}}}}},"tags":["Voice Models"]}}}} ``` --- # Source: https://docs.aimlapi.com/api-references/speech-models/voice-chat/minimax/speech-2.6-turbo.md # Speech 2.6 Turbo {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `minimax/speech-2.6-turbo` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} The model generates speech from text prompts and multiple voices, optimized for fast, low-latency synthesis. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## Quick Code Example Here is an example of generating an audio response to the user input provided in the `text` parameter. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import os import requests def main(): url = "https://api.aimlapi.com/v1/tts" headers = { # Insert your AIML API Key instead of : "Authorization": "Bearer ", } payload = { "model": "minimax/speech-2.6-turbo", "text": "Hi! What are you doing today?", "voice_setting": { "voice_id": "Wise_Woman" } } response = requests.post(url, headers=headers, json=payload, stream=True) dist = os.path.abspath("your_file_name.wav") with open(dist, "wb") as write_stream: for chunk in response.iter_content(chunk_size=8192): if chunk: write_stream.write(chunk) print("Audio saved to:", dist) main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript import fs from "fs"; import path from "path"; async function main() { const url = "https://api.aimlapi.com/v1/tts"; const payload = { model: "minimax/speech-2.6-turbo", text: "Hi! What are you doing today?", voice_setting: { voice_id: "Wise_Woman" } }; const response = await fetch(url, { method: "POST", headers: { // Insert your AIML API Key instead of : "Authorization": `Bearer `, "Content-Type": "application/json" }, body: JSON.stringify(payload) }); // Read response as ArrayBuffer and convert to Buffer const arrayBuffer = await response.arrayBuffer(); const buffer = Buffer.from(arrayBuffer); // Save audio to file in the current working directory const dist = path.join(process.cwd(), "your_file_name.wav"); fs.writeFileSync(dist, buffer); console.log("Audio saved to:", dist); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response ``` Audio saved to: c:\Users\user\Documents\Python Scripts\TTSes\your_file_name.wav ```
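Beyond the basic call, the schema below describes `language_boost`, `pronunciation_dict`, and `voice_modify` for tuning how the text is read. A minimal sketch under those schema assumptions follows; the sample text, pronunciation entry, effect choices, and file name are illustrative.

{% code overflow="wrap" %}
```python
import requests

# Illustrative placeholder: insert your AIML API Key.
API_KEY = "<YOUR_AIMLAPI_KEY>"

def main():
    payload = {
        "model": "minimax/speech-2.6-turbo",
        "text": "OMG, the new release is finally out!",
        "voice_setting": {"voice_id": "Casual_Guy"},
        # Let the service detect and boost the dominant language.
        "language_boost": "auto",
        # Override how specific words are spoken ("text/pronunciation" pairs).
        "pronunciation_dict": {"tone": ["omg/oh my god"]},
        # Post-process the voice: small pitch shift plus a lo-fi telephone effect.
        "voice_modify": {"pitch": 5, "sound_effects": "lofi_telephone"},
    }
    response = requests.post(
        "https://api.aimlapi.com/v1/tts",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json=payload,
        stream=True,
    )
    response.raise_for_status()

    # Write the returned audio bytes to disk, as in the quick example above.
    with open("tuned_voice.mp3", "wb") as f:
        for chunk in response.iter_content(chunk_size=8192):
            if chunk:
                f.write(chunk)
    print("Audio saved to tuned_voice.mp3")

if __name__ == "__main__":
    main()
```
{% endcode %}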
**Generation time**: \~ 4.5 s. {% embed url="" %} ## API Schema ## POST /v1/tts > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.TextToSpeechResponse":{"type":"object","properties":{"metadata":{"type":"object","properties":{"transaction_key":{"type":"string"},"request_id":{"type":"string"},"sha256":{"type":"string"},"created":{"type":"string","format":"date-time"},"duration":{"type":"number"},"channels":{"type":"number"},"models":{"type":"array","items":{"type":"string"}},"model_info":{"type":"object","additionalProperties":{"type":"object","properties":{"name":{"type":"string"},"version":{"type":"string"},"arch":{"type":"string"}},"required":["name","version","arch"]}}},"required":["transaction_key","request_id","sha256","created","duration","channels","models","model_info"]}},"required":["metadata"]}}},"paths":{"/v1/tts":{"post":{"operationId":"VoiceModelsController_textToSpeech_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["minimax/speech-2.6-turbo"]},"text":{"type":"string","minLength":1,"maxLength":5000,"description":"The text content to be converted to speech."},"voice_setting":{"type":"object","properties":{"voice_id":{"type":"string","enum":["Wise_Woman","Friendly_Person","Inspirational_girl","Deep_Voice_Man","Calm_Woman","Casual_Guy","Lively_Girl","Patient_Man","Young_Knight","Determined_Man","Lovely_Girl","Decent_Boy","Imposing_Manner","Elegant_Man","Abbess","Sweet_Girl_2","Exuberant_Girl"],"description":"A predefined system voice for text-to-speech synthesis."},"speed":{"type":"number","minimum":0.5,"maximum":2,"default":1,"description":"Adjusts the speed of the voice. A value of 1.0 is the default speed, while values less than 1.0 slow down the speech, and values greater than 1.0 speed it up."},"vol":{"type":"number","minimum":0.01,"maximum":10,"default":1,"description":"The volume of the generated speech. Range: (0, 10]. Larger values indicate larger volumes."},"pitch":{"type":"number","minimum":-12,"maximum":12,"default":0,"description":"The pitch of the generated speech. Range: [-12, 12]. 0 = default voice output."},"emotion":{"type":"string","enum":["happy","sad","angry","fearful","disgusted","surprised","neutral"],"description":"Emotional tone to apply to the synthesized speech. Controls the emotional expression of the generated voice output."},"text_normalization":{"type":"boolean","default":false,"description":"English text normalization support. Improves number-reading but increases latency."}},"default":{"voice_id":"Wise_Woman"},"required":["voice_id"],"description":"Voice settings overriding stored settings for the given voice. They are applied only on the given request."},"audio_setting":{"type":"object","properties":{"sample_rate":{"type":"integer","description":"Audio sample rate in Hz.","enum":[8000,16000,22050,24000,32000,44100]},"bitrate":{"type":"integer","description":"Audio bitrate in bits per second. Controls the compression level and audio quality. Higher bitrates provide better quality but larger file sizes.","enum":[32000,64000,128000,256000]},"format":{"type":"string","enum":["mp3","pcm","flac"],"default":"mp3","description":"Audio output format. 
MP3 provides good compression and compatibility, PCM offers uncompressed high quality, and FLAC provides lossless compression."},"channel":{"type":"integer","description":"Number of audio channels. 1 for mono (single channel), 2 for stereo (dual channel) output.","enum":[1,2]}},"description":"Audio output configuration"},"pronunciation_dict":{"type":"object","properties":{"tone":{"type":"array","items":{"type":"string"},"description":"Replacement of text and pronunciations. Format: [\"燕少飞/(yan4)(shao3)(fei1)\", \"达菲/(da2)(fei1)\", \"omg/oh my god\"]"}},"required":["tone"],"description":"Custom pronunciation dictionary for handling specific words or phrases. Allows fine-tuning of how certain text should be pronounced using phonetic representations."},"timbre_weights":{"type":"array","items":{"type":"object","properties":{"voice_id":{"type":"string","enum":["Wise_Woman","Friendly_Person","Inspirational_girl","Deep_Voice_Man","Calm_Woman","Casual_Guy","Lively_Girl","Patient_Man","Young_Knight","Determined_Man","Lovely_Girl","Decent_Boy","Imposing_Manner","Elegant_Man","Abbess","Sweet_Girl_2","Exuberant_Girl"],"description":"A predefined system voice for text-to-speech synthesis."},"weight":{"type":"integer","minimum":1,"maximum":100,"description":"Weight for voice mixing. Range: [1, 100]. Higher weights are sampled more heavily."}},"required":["voice_id","weight"]},"maxItems":4,"description":"Voice mixing configuration allowing combination of up to 4 different voices with specified weights. Each voice contributes to the final output based on its weight value (1-100)."},"stream":{"type":"boolean","default":false,"description":"Enable streaming mode for real-time audio generation. When enabled, audio is generated and delivered in chunks as it's processed."},"language_boost":{"type":"string","enum":["Chinese","Chinese,Yue","English","Arabic","Russian","Spanish","French","Portuguese","German","Turkish","Dutch","Ukrainian","Vietnamese","Indonesian","Japanese","Italian","Korean","Thai","Polish","Romanian","Greek","Czech","Finnish","Hindi","Bulgarian","Danish","Hebrew","Malay","Persian","Slovak","Swedish","Croatian","Filipino","Hungarian","Norwegian","Slovenian","Catalan","Nynorsk","Tamil","Afrikaans","auto"],"description":"Language recognition enhancement option."},"voice_modify":{"type":"object","properties":{"pitch":{"type":"integer","minimum":-100,"maximum":100,"description":"Pitch level (-100 to 100)"},"intensity":{"type":"integer","minimum":-100,"maximum":100,"description":"Intensity level (-100 to 100)"},"timbre":{"type":"integer","minimum":-100,"maximum":100,"description":"Timbre level (-100 to 100)"},"sound_effects":{"type":"string","enum":["spacious_echo","auditorium_echo","lofi_telephone","robotic"],"description":"Audio effects to apply to the synthesized speech. Includes options like spacious_echo, auditorium_echo, lofi_telephone, and robotic effects."}},"description":"Voice modification settings for adjusting pitch, intensity, timbre, and applying sound effects to customize the voice characteristics."},"subtitle_enable":{"type":"boolean","default":false,"description":"Enable subtitle generation service. Only available for non-streaming requests. Generates timing information for the synthesized speech."},"output_format":{"type":"string","enum":["url","hex"],"default":"hex","description":"Format of the output content for non-streaming requests. 
Controls how the generated audio data is encoded in the response."}},"required":["model","text"]}}}},"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.TextToSpeechResponse"}}}}},"tags":["Voice Models"]}}}} ``` --- # Source: https://docs.aimlapi.com/api-references/speech-models.md # Voice/Speech Models With our API you are able to synthesize speech and transform speech into text. We support multiple voice/speech models. You can find the [complete list](#all-available-voice-speech-models) along with API reference links at the end of the page. {% content-ref url="speech-models/speech-to-text" %} [speech-to-text](https://docs.aimlapi.com/api-references/speech-models/speech-to-text) {% endcontent-ref %} {% content-ref url="speech-models/text-to-speech" %} [text-to-speech](https://docs.aimlapi.com/api-references/speech-models/text-to-speech) {% endcontent-ref %} {% content-ref url="speech-models/voice-chat" %} [voice-chat](https://docs.aimlapi.com/api-references/speech-models/voice-chat) {% endcontent-ref %} ## All Available Voice/Speech Models ### Speech-to-Text
| Model ID + API Reference link | Developer | Context | Model Card |
| --- | --- | --- | --- |
| `aai/slam-1` | Assembly AI | | Slam 1 |
| `aai/universal` | Assembly AI | | Universal |
| `#g1_nova-2-automotive` | Deepgram | | Deepgram Nova-2 |
| `#g1_nova-2-conversationalai` | Deepgram | | Deepgram Nova-2 |
| `#g1_nova-2-drivethru` | Deepgram | | Deepgram Nova-2 |
| `#g1_nova-2-finance` | Deepgram | | Deepgram Nova-2 |
| `#g1_nova-2-general` | Deepgram | | Deepgram Nova-2 |
| `#g1_nova-2-medical` | Deepgram | | Deepgram Nova-2 |
| `#g1_nova-2-meeting` | Deepgram | | Deepgram Nova-2 |
| `#g1_nova-2-phonecall` | Deepgram | | Deepgram Nova-2 |
| `#g1_nova-2-video` | Deepgram | | Deepgram Nova-2 |
| `#g1_nova-2-voicemail` | Deepgram | | Deepgram Nova-2 |
| `#g1_whisper-tiny` | OpenAI | | - |
| `#g1_whisper-small` | OpenAI | | - |
| `#g1_whisper-base` | OpenAI | | - |
| `#g1_whisper-medium` | OpenAI | | - |
| `#g1_whisper-large` | OpenAI | | Whisper |
| `openai/gpt-4o-transcribe` | OpenAI | | GPT-4o Transcribe |
| `openai/gpt-4o-mini-transcribe` | OpenAI | | GPT-4o Mini Transcribe |
### Text-to-Speech
| Model ID | Developer | Context | Model Card |
| --- | --- | --- | --- |
| `alibaba/qwen3-tts-flash` | Alibaba Cloud | | Qwen3-TTS-Flash |
| `#g1_aura-angus-en` | Deepgram | | Aura |
| `#g1_aura-arcas-en` | Deepgram | | Aura |
| `#g1_aura-asteria-en` | Deepgram | | Aura |
| `#g1_aura-athena-en` | Deepgram | | Aura |
| `#g1_aura-helios-en` | Deepgram | | Aura |
| `#g1_aura-hera-en` | Deepgram | | Aura |
| `#g1_aura-luna-en` | Deepgram | | Aura |
| `#g1_aura-orion-en` | Deepgram | | Aura |
| `#g1_aura-orpheus-en` | Deepgram | | Aura |
| `#g1_aura-perseus-en` | Deepgram | | Aura |
| `#g1_aura-stella-en` | Deepgram | | Aura |
| `#g1_aura-zeus-en` | Deepgram | | Aura |
| `#g1_aura-2-amalthea-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-andromeda-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-apollo-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-arcas-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-aries-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-asteria-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-athena-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-atlas-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-aurora-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-callista-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-cora-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-cordelia-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-delia-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-draco-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-electra-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-harmonia-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-helena-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-hera-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-hermes-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-hyperion-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-iris-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-janus-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-juno-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-jupiter-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-luna-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-mars-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-minerva-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-neptune-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-odysseus-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-ophelia-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-orion-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-orpheus-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-pandora-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-phoebe-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-pluto-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-saturn-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-selene-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-thalia-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-theia-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-vesta-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-zeus-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-celeste-es` | Deepgram | | Aura 2 |
| `#g1_aura-2-estrella-es` | Deepgram | | Aura 2 |
| `#g1_aura-2-nestor-es` | Deepgram | | Aura 2 |
| `elevenlabs/eleven_multilingual_v2` | ElevenLabs | | ElevenLabs Multilingual v2 |
| `elevenlabs/eleven_turbo_v2_5` | ElevenLabs | | ElevenLabs Turbo v2.5 |
| `hume/octave-2` | Hume AI | | Octave 2 |
| `inworld/tts-1` | Inworld | | Inworld TTS-1 |
| `inworld/tts-1-max` | Inworld | | Inworld TTS-1-Max |
| `microsoft/vibevoice-1.5b` | Microsoft | | VibeVoice 1.5B |
| `microsoft/vibevoice-7b` | Microsoft | | VibeVoice 7B |
| `openai/tts-1` | OpenAI | | TTS-1 |
| `openai/tts-1-hd` | OpenAI | | TTS-1 HD |
| `openai/gpt-4o-mini-tts` | OpenAI | | GPT-4o-mini-TTS |
### Voice Chat
| Model ID | Developer | Context | Model Card |
| --- | --- | --- | --- |
| `elevenlabs/v3_alpha` | ElevenLabs | | Eleven v3 Alpha |
| `minimax/speech-2.5-turbo-preview` | MiniMax | | MiniMax Speech 2.5 Turbo |
| `minimax/speech-2.5-hd-preview` | MiniMax | | MiniMax Speech 2.5 HD |
| `minimax/speech-2.6-turbo` | MiniMax | | MiniMax Speech 2.6 Turbo |
| `minimax/speech-2.6-hd` | MiniMax | | MiniMax Speech 2.6 HD |
--- # Source: https://docs.aimlapi.com/api-references/speech-models/speech-to-text.md # Speech-to-Text ## Overview Speech-to-text models convert spoken language into written text, enabling voice-based interactions across various applications. These models leverage deep learning techniques, such as recurrent neural networks (RNNs) and transformers, to process audio signals and transcribe them with high accuracy. They are commonly used in voice assistants, transcription services, and accessibility tools, supporting multiple languages and adapting to different accents and speech patterns. {% hint style="warning" %} Generated audio transcriptions are stored on the server for 1 hour from the time of creation. {% endhint %} ## Quick Code Examples Let's use the `#g1_whisper-large` model to transcribe the following audio fragment: {% embed url="" %} ### Example #1: Processing a Speech Audio File via URL
{% code overflow="wrap" %}
```python
import time
import requests

base_url = "https://api.aimlapi.com/v1"
# Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
api_key = "<YOUR_AIMLAPI_KEY>"

# Creating and sending a speech-to-text conversion task to the server
def create_stt():
    url = f"{base_url}/stt/create"
    headers = {
        "Authorization": f"Bearer {api_key}", 
    }

    data = {
        "model": "#g1_whisper-large",
        "url": "https://audio-samples.github.io/samples/mp3/blizzard_primed/sample-0.mp3"
    }
 
    response = requests.post(url, json=data, headers=headers)
    
    if response.status_code >= 400:
        print(f"Error: {response.status_code} - {response.text}")
    else:
        response_data = response.json()
        print(response_data)
        return response_data

# Requesting the result of the task from the server using the generation_id
def get_stt(gen_id):
    url = f"{base_url}/stt/{gen_id}"
    headers = {
        "Authorization": f"Bearer {api_key}", 
    }
    response = requests.get(url, headers=headers)
    return response.json()
    
# First, start the generation, then repeatedly request the result from the server every 10 seconds.
def main():
    stt_response = create_stt()
    gen_id = stt_response.get("generation_id")


    if gen_id:
        start_time = time.time()

        timeout = 600
        while time.time() - start_time < timeout:
            response_data = get_stt(gen_id)

            if response_data is None:
                print("Error: No response from API")
                break
        
            status = response_data.get("status")

            if status == "waiting" or status == "active":
                ("Still waiting... Checking again in 10 seconds.")
                time.sleep(10)
            else:
                print("Processing complete:\n", response_data["result"]['results']["channels"][0]["alternatives"][0]["transcript"])
                return response_data
   
        print("Timeout reached. Stopping.")
        return None     


if __name__ == "__main__":
    main()
```
{% endcode %}
Response {% code overflow="wrap" %} ``` {'generation_id': 'e3d46bba-7562-44a9-b440-504d940342a3'} Processing complete: he doesn't belong to you and i don't see how you have anything to do with what is be his power yet he's he personified from this stage to you be fire ``` {% endcode %}
### Example #2: Processing a Speech Audio File via File Path {% code overflow="wrap" %} ```python import time import requests base_url = "https://api.aimlapi.com/v1" # Insert your AIML API Key instead of : api_key = "" # Creating and sending a speech-to-text conversion task to the server def create_stt(): url = f"{base_url}/stt/create" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "#g1_whisper-large", } with open("stt-sample.mp3", "rb") as file: files = {"audio": ("sample.mp3", file, "audio/mpeg")} response = requests.post(url, data=data, headers=headers, files=files) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_stt(gen_id): url = f"{base_url}/stt/{gen_id}" headers = { "Authorization": f"Bearer {api_key}", } response = requests.get(url, headers=headers) return response.json() # First, start the generation, then repeatedly request the result from the server every 10 seconds. def main(): stt_response = create_stt() gen_id = stt_response.get("generation_id") if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_stt(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status == "waiting" or status == "active": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data["result"]['results']["channels"][0]["alternatives"][0]["transcript"]) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %}
Response {% code overflow="wrap" %} ```lisp {'generation_id': 'dd412e9d-044c-43ae-b97b-e920755074d5'} Processing complete: he doesn't belong to you and i don't see how you have anything to do with what is be his power yet he's he personified from this stage to you be fire ``` {% endcode %}
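If you want to keep the transcript, you can reuse the response structure accessed in both examples above and write it to disk. A minimal sketch; the helper name and file path are illustrative.

{% code overflow="wrap" %}
```python
# Call this with the final response_data returned by get_stt() once the task
# is no longer in the "waiting"/"active" state (see the examples above).
def save_transcript(response_data, path="transcript.txt"):
    transcript = response_data["result"]["results"]["channels"][0]["alternatives"][0]["transcript"]
    with open(path, "w", encoding="utf-8") as f:
        f.write(transcript)
    print("Transcript saved to", path)
```
{% endcode %}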
## All Available Speech-to-Text Models
| Model ID + API Reference link | Developer | Context | Model Card |
| --- | --- | --- | --- |
| `aai/slam-1` | Assembly AI | | Slam 1 |
| `aai/universal` | Assembly AI | | Universal |
| `#g1_nova-2-automotive` | Deepgram | | Deepgram Nova-2 |
| `#g1_nova-2-conversationalai` | Deepgram | | Deepgram Nova-2 |
| `#g1_nova-2-drivethru` | Deepgram | | Deepgram Nova-2 |
| `#g1_nova-2-finance` | Deepgram | | Deepgram Nova-2 |
| `#g1_nova-2-general` | Deepgram | | Deepgram Nova-2 |
| `#g1_nova-2-medical` | Deepgram | | Deepgram Nova-2 |
| `#g1_nova-2-meeting` | Deepgram | | Deepgram Nova-2 |
| `#g1_nova-2-phonecall` | Deepgram | | Deepgram Nova-2 |
| `#g1_nova-2-video` | Deepgram | | Deepgram Nova-2 |
| `#g1_nova-2-voicemail` | Deepgram | | Deepgram Nova-2 |
| `#g1_whisper-tiny` | OpenAI | | - |
| `#g1_whisper-small` | OpenAI | | - |
| `#g1_whisper-base` | OpenAI | | - |
| `#g1_whisper-medium` | OpenAI | | - |
| `#g1_whisper-large` | OpenAI | | Whisper |
| `openai/gpt-4o-transcribe` | OpenAI | | GPT-4o Transcribe |
| `openai/gpt-4o-mini-transcribe` | OpenAI | | GPT-4o Mini Transcribe |
--- # Source: https://docs.aimlapi.com/api-references/3d-generating-models/stability-ai.md # Source: https://docs.aimlapi.com/api-references/music-models/stability-ai.md # Source: https://docs.aimlapi.com/api-references/image-models/stability-ai.md # Stability AI - [Stable Diffusion v3 Medium](/api-references/image-models/stability-ai/stable-diffusion-v3-medium.md) - [Stable Diffusion v3.5 Large](/api-references/image-models/stability-ai/stable-diffusion-v35-large.md) --- # Source: https://docs.aimlapi.com/api-references/music-models/stability-ai/stable-audio.md # stable-audio {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `stable-audio` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} An advanced audio generation model designed to create high-quality audio tracks from textual prompts. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schemas ## POST /v2/generate/audio > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/generate/audio":{"post":{"operationId":"_v2_generate_audio","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["stable-audio"]},"prompt":{"type":"string","description":"The prompt to generate audio."},"seconds_start":{"type":"integer","maximum":47,"minimum":1,"description":"The start point of the audio clip to generate."},"seconds_total":{"type":"integer","maximum":47,"minimum":1,"default":30,"description":"The duration of the audio clip to generate."},"steps":{"type":"integer","minimum":1,"maximum":1000,"default":100,"description":"The number of steps to denoise the audio."}},"required":["model","prompt"],"title":"stable-audio"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated audio."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"audio_file":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated music sample from the server After sending a request for music generation, this task is added to the queue. This endpoint lets you check the status of a audio generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `complete`, the response will include the final result — with the generated audio URL and additional metadata. 
## GET /v2/generate/audio > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/generate/audio":{"get":{"operationId":"_v2_generate_audio","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated audio."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"audio_file":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Audio From the Server The code below creates a audio generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import time import requests # Insert your AI/ML API key instead of : aimlapi_key = '' # Creating and sending an audio generation task to the server (returns a generation ID) def generate_audio(): url = "https://api.aimlapi.com/v2/generate/audio" payload = { "model": "elevenlabs/eleven_music", "prompt": "lo-fi pop hip-hop ambient music, slow intro: 10 s, then faster and with loud bass: 10 s", "seconds_total": 20, } headers = {"Authorization": f"Bearer {aimlapi_key}", "Content-Type": "application/json"} response = requests.post(url, json=payload, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print("Generation: ", response_data) return response_data # Requesting the result of the generation task from the server using the generation_id: def retrieve_audio(gen_id): url = "https://api.aimlapi.com/v2/generate/audio" params = { "generation_id": gen_id, } headers = {"Authorization": f"Bearer {aimlapi_key}", "Content-Type": "application/json"} response = requests.get(url, params=params, headers=headers) return response.json() # This is the main function of the program. From here, we sequentially call the audio generation and then repeatedly request the result from the server every 10 seconds: def main(): generation_response = generate_audio() gen_id = generation_response.get("id") if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = retrieve_audio(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["queued", "generating"]: print(f"Status: {status}. 
Checking again in 10 seconds.") time.sleep(10) else: print("Generation complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a audio generation task to the server function generateAudio(callback) { const data = JSON.stringify({ model: "elevenlabs/eleven_music", prompt: "lo-fi pop hip-hop ambient music, slow intro: 10 s, then faster and with loud bass: 10 s", seconds_total: 20, }); const url = new URL(`${baseUrl}/generate/audio`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getAudio(genId, callback) { const url = new URL(`${baseUrl}/generate/audio`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates sound generation and checks the status every 10 seconds until completion or timeout function main() { generateAudio((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 10 * 1000; // 10 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getAudio(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 10 seconds.`); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }) } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: ed58c4e0-2ed6-429f-91a1-b2c13a89ff46:stable-audio Status: queued. Checking again in 10 seconds. Status: generating. Checking again in 10 seconds. Status: generating. Checking again in 10 seconds. Status: generating. Checking again in 10 seconds. Processing complete: { id: 'ed58c4e0-2ed6-429f-91a1-b2c13a89ff46:stable-audio', status: 'completed', audio_file: { url: 'https://cdn.aimlapi.com/flamingo/files/b/0a88448e/wxI96EIL4Noe21Zt3XsFc_tmpdwdfh537.wav', content_type: 'application/octet-stream', file_name: 'tmpdwdfh537.wav', file_size: 5292078 } } ``` {% endcode %}
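Once the task status is `completed`, the `audio_file.url` field from the response above points to the finished track. Below is a minimal sketch for downloading it; the helper name and output file name are illustrative.

{% code overflow="wrap" %}
```python
import requests

# Pass in the completed response returned by retrieve_audio() / getAudio() above.
def download_audio(response_data, path="generated_track.wav"):
    url = response_data["audio_file"]["url"]
    audio = requests.get(url, stream=True)
    audio.raise_for_status()
    with open(path, "wb") as f:
        for chunk in audio.iter_content(chunk_size=8192):
            if chunk:
                f.write(chunk)
    print("Audio saved to", path)
```
{% endcode %}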
Listen to the track we generated: {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/image-models/stability-ai/stable-diffusion-v3-medium.md # Stable Diffusion v3 Medium {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `stable-diffusion-v3-medium` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview An advanced text-to-image generation model that utilizes a Multimodal Diffusion Transformer (MMDiT) architecture to produce high-quality images from textual descriptions. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["stable-diffusion-v3-medium"]},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"num_images":{"type":"number","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."},"seed":{"type":"integer","minimum":1,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"image_size":{"anyOf":[{"type":"object","properties":{"width":{"type":"integer","minimum":64,"maximum":1536,"default":1024},"height":{"type":"integer","minimum":64,"maximum":1536,"default":768}},"description":"For both height and width, the value must be a multiple of 32."},{"type":"string","enum":["square_hd","square","portrait_4_3","portrait_16_9","landscape_4_3","landscape_16_9"],"description":"The size of the generated image."}],"default":"square_hd"},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated image."},"prompt_expansion":{"type":"boolean","description":"If set to True, prompt will be upsampled with more details."},"guidance_scale":{"type":"number","minimum":1,"maximum":20,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt when looking for a related image to show you."},"num_inference_steps":{"type":"integer","minimum":1,"maximum":50,"default":50,"description":"The number of inference steps to perform."},"enable_safety_checker":{"type":"boolean","default":true,"description":"If set to True, the safety checker will be enabled."}},"required":["model","prompt"],"title":"stable-diffusion-v3-medium"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens 
consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image using a simple prompt. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.", "model": "stable-diffusion-v3-medium", "image_size": "landscape_16_9" } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'stable-diffusion-v3-medium', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses. Realistic photo.', image_size: 'landscape_16_9' }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { images: [ { url: 'https://cdn.aimlapi.com/squirrel/files/monkey/pAs554_StzWBkrLMgTH5a.png', width: 1024, height: 576, content_type: 'image/jpeg' } ], timings: { inference: 1.1477893170085736 }, seed: 3544609846964942300, has_nsfw_concepts: [ false ], prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses. Realistic photo.', num_images: 1 } ``` {% endcode %}
We obtained the following 1024x576 image by running this code example:

"A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses."

--- # Source: https://docs.aimlapi.com/api-references/image-models/stability-ai/stable-diffusion-v35-large.md # Stable Diffusion v3.5 Large {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `stable-diffusion-v35-large` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A state-of-the-art text-to-image generative model designed to create high-resolution images based on textual prompts. It excels in producing diverse and high-quality outputs, making it suitable for professional applications. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["stable-diffusion-v35-large"]},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"num_images":{"type":"number","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."},"seed":{"type":"integer","minimum":1,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"image_size":{"anyOf":[{"type":"object","properties":{"width":{"type":"integer","minimum":64,"maximum":1536,"default":1024},"height":{"type":"integer","minimum":64,"maximum":1536,"default":768}},"description":"For both height and width, the value must be a multiple of 32."},{"type":"string","enum":["square_hd","square","portrait_4_3","portrait_16_9","landscape_4_3","landscape_16_9"],"description":"The size of the generated image."}],"default":"square_hd"},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated image."},"guidance_scale":{"type":"number","minimum":1,"maximum":20,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt when looking for a related image to show you."},"num_inference_steps":{"type":"integer","minimum":1,"maximum":50,"default":50,"description":"The number of inference steps to perform."},"enable_safety_checker":{"type":"boolean","default":true,"description":"If set to True, the safety checker will be enabled."},"output_format":{"type":"string","enum":["jpeg","png"],"default":"jpeg","description":"The format of the generated image."}},"required":["model","prompt"],"title":"stable-diffusion-v35-large"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of 
tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image using a simple prompt. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.", "model": "stable-diffusion-v35-large", "image_size": "landscape_16_9", "num_inference_steps": 40, } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'stable-diffusion-v35-large', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses. Realistic photo.', image_size: 'landscape_16_9', num_inference_steps: 40, }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { images: [ { url: 'https://cdn.aimlapi.com/eagle/files/elephant/4vP0cAmlTNsadiYaMFE30.jpeg', width: 1024, height: 576, content_type: 'image/jpeg' } ], timings: { inference: 4.855678029009141 }, seed: 6199662706750842000, has_nsfw_concepts: [ false ], prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.' } ``` {% endcode %}
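If you want to store the generated image locally, you can take its URL from the response and download the file. A minimal sketch based on the response structure shown above (the file name here is arbitrary):

{% code overflow="wrap" %}
```python
import requests

def save_first_image(generation: dict, file_name: str = "t_rex.jpeg") -> None:
    # The response contains a list of generated images with direct URLs
    url = generation["images"][0]["url"]
    image = requests.get(url)
    image.raise_for_status()
    with open(file_name, "wb") as file:
        file.write(image.content)
```
{% endcode %}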
We obtained the following 1024x576 image by running this code example:

"A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses."

**Extra pictures**

"A highly detailed T-Rex relaxing on a sunny beach, lying on a wooden sun lounger and wearing stylish sunglasses. Its skin is covered in realistic, finely textured scales with natural color variations — rough and weathered like that of large reptiles. Sunlight reflects subtly off the individual scales. The background includes palm trees, gentle waves, and soft sand partially covering the T-Rex's feet. The scene is rendered with cinematic lighting and a natural color palette."
`"num_inference_steps": 40`

"Racoon eating ice-cream"

"A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses. Vector illustration style. Top-down view, with visible palm trees, seagulls, and a strip of water."
`"num_inference_steps": 40`
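The request body also accepts a number of optional parameters documented in the API schema above (`negative_prompt`, `seed`, `num_images`, `guidance_scale`, `output_format`, and others). A minimal sketch of a request that uses some of them (the prompt and the specific values are only examples):

{% code overflow="wrap" %}
```python
import requests
import json

response = requests.post(
    "https://api.aimlapi.com/v1/images/generations",
    headers={
        # Insert your AIML API Key:
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    json={
        "model": "stable-diffusion-v35-large",
        "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.",
        "negative_prompt": "blurry, low quality",      # elements to avoid
        "image_size": {"width": 1024, "height": 576},  # both must be multiples of 32
        "num_images": 2,           # between 1 and 4
        "seed": 42,                # makes the result reproducible
        "guidance_scale": 7,       # between 1 and 20
        "output_format": "png",
    },
)
print(json.dumps(response.json(), indent=2, ensure_ascii=False))
```
{% endcode %}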
---

# Source: https://docs.aimlapi.com/capabilities/streaming-mode.md

# Streaming Mode

Streaming mode allows the text chat model to deliver responses as they are generated, rather than waiting for the entire response to be completed. This provides faster feedback and a more fluid interaction. The `stream` parameter is used to enable/disable this mode.

You can also use this functionality when programming [Assistants](https://docs.aimlapi.com/solutions/openai/assistants), though tracking and handling all necessary events is the responsibility of the developer. An example can be found in [one of our use cases related to Assistant creation](https://docs.aimlapi.com/use-cases/create-an-assistant-to-discuss-a-specific-document#id-4.-add-streaming-mode).

All our available [text models](https://docs.aimlapi.com/api-references/text-models-llm) support this feature.

---

# Source: https://docs.aimlapi.com/use-cases/summarize-websites-with-ai-powered-chrome-extension.md

# Summarize Websites with AI-Powered Chrome Extension

## Intro

In this tutorial, we'll show how to build a Chrome extension from scratch using an AI/ML API. You'll set up the development environment, install the necessary tools, and implement key components such as:

* `manifest.json`: Contains essential metadata about your extension.
* `scripts.js`: Defines the extension's functionality and behavior.
* `styles.css`: Adds styling for a polished look.
* `popup.html`: Provides the user interface for your extension.
* `popup.js`: Handles interactions and functionality within the popup interface.

Throughout the tutorial, we'll highlight best practices for building Chrome extensions and managing user interactions effectively. By the end, you'll have a strong foundation for creating Chrome extensions and the skills to develop your own AI-powered solutions.

## Getting Started with Chrome Extensions

Building a Chrome extension requires understanding its structure, permissions, and interaction with web pages. In this section, we'll set up our development environment and create the foundational files necessary to start building an extension.

### Setting Up Your Development Environment

Before diving into coding, ensure you have the following prerequisites:

* *Chrome Browser*: This is where we'll load, test, and debug our extension.
* *Text Editor or IDE*: Tools like Visual Studio Code, Sublime Text, or Atom are excellent choices for editing code.
* *Basic Knowledge of HTML, CSS, and JavaScript*: Familiarity with these core web technologies is essential for building Chrome extensions.

### Creating the Project Structure

A minimal Chrome extension requires at least three key files:

1. *manifest.json*: Contains metadata and configuration for the extension, such as its name, version, permissions, and the scripts it uses.
2. *scripts.js*: Includes the JavaScript code that defines the extension's functionality.
3. *styles.css*: Provides styling for any user interface elements in the extension.

As we progress through this tutorial, we will also create:

* *popup.html*: A file that defines the structure of the extension's popup interface.
* *popup.js*: A script to manage the interactivity and logic of the popup.

### Setting Up the Project

1. Create a new directory for your extension project.
2. Inside this directory, create the following three files:
   * *manifest.json*
   * *scripts.js*
   * *styles.css*

Once your project structure is in place, you'll be ready to begin writing the code for your extension.
### **Understanding manifest.json**

The *manifest.json* file is the core of your Chrome extension. It provides the browser with essential information about your extension, including its purpose, functionality, and the permissions it requires. Let's explore how to configure this file effectively.

#### **Key Elements in *manifest.json* for the "Summarize" Extension**

The following configuration demonstrates the essential and additional fields in the *manifest.json* file for the "Summarize" Chrome extension.

#### **Essential Fields**

* *manifest\_version*: Specifies the version of the manifest file format. This extension uses version 3, the latest standard for Chrome extensions.
* *name*: The name of the extension, "Summarize", indicates its purpose.
* *version*: The initial release version is "1.0", following semantic versioning.

#### **Additional Metadata and Permissions**

* *description*: A concise summary of the extension's functionality: "Write a summary of a website or text".
* *host\_permissions*: The wildcard \*://\*.aimlapi.com/\* grants access to all subdomains of aimlapi.com, enabling integration with the AI/ML API.
* *permissions*: Includes "activeTab", allowing the extension to interact with the content of the current active browser tab.
* *content\_scripts*: Defines scripts and styles to be injected into web pages:
  * *matches*: Targets all URLs with `<all_urls>`.
  * *js*: Loads scripts.js, which defines the extension's behavior on web pages.
  * *css*: Loads styles.css for any necessary styling.
* *icons*: Specifies paths to the extension's icons in three sizes:
  * 16x16: Small icon for toolbars and buttons.
  * 48x48: Default-sized icon.
  * 128x128: Large icon for extension details in the Chrome Web Store.

### Generating an Icon

You can create an icon for your Chrome extension using tools like ChatGPT or AI/ML platforms. Here's the prompt I used to generate an icon:

```markdown
Generate a black-and-white icon for my 'Summarize' Chrome extension. This extension enables users to highlight specific text on a website or summarize the entire page. It's an AI-powered tool. The icon should feature a solid white background.
```

Download the icon and rename it appropriately. You can use a single icon for different sizes.

### Developing scripts.js

The *scripts.js* file contains the logic that defines your extension's behavior. In this section, we'll outline the key functionalities your script needs to implement.

#### **Variables and Initialization**

Begin by setting up the essential variables:

* `AIML_API_KEY`: Obtain an API key from the AI/ML API platform to authenticate your requests.
* `MODEL`: Define which AI/ML model you want to use for processing the text.
* `overlay`: A variable to manage the overlay that will display the summary or other relevant information on the page.

```javascript
const getSummary = async text => {
  try {
    const headers = {
      Authorization: `Bearer ${AIML_API_KEY}`,
      'Content-Type': 'application/json',
    };

    const jsonData = {
      model: MODEL,
      messages: [
        {
          role: 'assistant',
          content: `You are an AI assistant who provides summaries for long texts.
          You are using HTML tags to format your response.`,
        },
        {
          role: 'user',
          content: `Please summarize the following text: ${text}`,
        },
      ],
    };

    const response = await fetch(
      'https://api.aimlapi.com/v1/chat/completions',
      {
        method: 'POST',
        headers: headers,
        body: JSON.stringify(jsonData),
      }
    );

    if (!response.ok) {
      throw new Error('API request failed');
    }

    const data = await response.json();
    return data.choices[0].message.content;
  } catch (error) {
    console.log(`Error: ${error}`);
  }
};
```

In the message, we specified that the model should send HTML in the response. This is crucial for preserving the original markup and formatting, ensuring that the content is displayed correctly on the web page.

#### **Creating the Summary Overlay**

Let's create a function to generate the overlay. This function will add the overlay and a button to the DOM. Additionally, we'll attach a click event listener to the button, which will trigger the `getSummary` function and display the response to the user.

```javascript
const createSummaryOverlay = text => {
  overlay = document.createElement('div');
  overlay.id = 'summary-overlay';
  overlay.style.display = 'none';

  const summaryButton = document.createElement('button');
  summaryButton.id = 'summary-button';
  summaryButton.textContent = 'Summarize';

  overlay.appendChild(summaryButton);
  document.body.appendChild(overlay);

  summaryButton.addEventListener('click', async () => {
    summaryButton.textContent = 'Summarizing...';
    summaryButton.disabled = true;

    try {
      const summary = await getSummary(text);
      summaryButton.textContent = 'Summary';

      const summaryContainer = document.createElement('div');
      summaryContainer.innerHTML = summary;
      overlay.appendChild(summaryContainer);
    } catch (error) {
      console.log(`Error: ${error}`);
    }
  });
};
```

The next function is `showOverlay`, which is responsible for displaying the overlay in the appropriate location on the page.

```javascript
const showOverlay = () => {
  const selection = window.getSelection();
  const range = selection.getRangeAt(0);
  const rect = range.getBoundingClientRect();

  overlay.style.display = 'flex';
  overlay.style.top = `${window.scrollY + rect.top - 50}px`;
  overlay.style.left = `${rect.left}px`;
};

document.addEventListener('mouseup', event => {
  if (event.target.closest('#summary-overlay')) return;

  const selectedText = window.getSelection().toString().trim();

  if (selectedText.length > 200 && selectedText.length < 7000) {
    // Pass the selected text so the Summarize button knows what to summarize
    if (!overlay) createSummaryOverlay(selectedText);
    showOverlay();
  } else if (overlay) {
    document.body.removeChild(overlay);
    overlay = null;
  }
});
```

#### **Full code:**
```javascript
const AIML_API_KEY = 'Your API KEY'; // Replace with your AIML_API_KEY
const MODEL = 'Your model';

let overlay = null;

const getSummary = async text => {
  try {
    const headers = {
      Authorization: `Bearer ${AIML_API_KEY}`,
      'Content-Type': 'application/json',
    };

    const jsonData = {
      model: MODEL,
      messages: [
        {
          role: 'assistant',
          content: `You are an AI assistant who provides summaries for long texts.
          You are using HTML tags to format your response.`,
        },
        {
          role: 'user',
          content: `Please summarize the following text: ${text}`,
        },
      ],
    };

    const response = await fetch(
      'https://api.aimlapi.com/v1/chat/completions',
      {
        method: 'POST',
        headers: headers,
        body: JSON.stringify(jsonData),
      }
    );

    if (!response.ok) {
      throw new Error('API request failed');
    }

    const data = await response.json();
    return data.choices[0].message.content;
  } catch (error) {
    console.log(`Error: ${error}`);
  }
};

const createSummaryOverlay = text => {
  overlay = document.createElement('div');
  overlay.id = 'summary-overlay';
  overlay.style.display = 'none';

  const summaryButton = document.createElement('button');
  summaryButton.id = 'summary-button';
  summaryButton.textContent = 'Summarize';

  overlay.appendChild(summaryButton);
  document.body.appendChild(overlay);

  summaryButton.addEventListener('click', async () => {
    summaryButton.textContent = 'Summarizing...';
    summaryButton.disabled = true;

    try {
      const summary = await getSummary(text);
      summaryButton.textContent = 'Summary';

      const summaryContainer = document.createElement('div');
      summaryContainer.innerHTML = summary;
      overlay.appendChild(summaryContainer);
    } catch (error) {
      console.log(`Error: ${error}`);
    }
  });
};

const showOverlay = () => {
  const selection = window.getSelection();
  const range = selection.getRangeAt(0);
  const rect = range.getBoundingClientRect();

  overlay.style.display = 'flex';
  overlay.style.top = `${window.scrollY + rect.top - 50}px`;
  overlay.style.left = `${rect.left}px`;
};

document.addEventListener('mouseup', event => {
  if (event.target.closest('#summary-overlay')) return;

  const selectedText = window.getSelection().toString().trim();

  if (selectedText.length > 200 && selectedText.length < 7000) {
    // Pass the selected text so the Summarize button knows what to summarize
    if (!overlay) createSummaryOverlay(selectedText);
    showOverlay();
  } else if (overlay) {
    document.body.removeChild(overlay);
    overlay = null;
  }
});
```
### Styling with styles.css

To ensure a smooth and intuitive user experience, your extension should feature a clean and user-friendly interface.

#### **Styling the Overlay and Button**

Define styles for the following elements:

* **Overlay Positioning**: use absolute positioning to place the overlay near the selected text.

```css
#summary-overlay {
  max-width: 500px;
  max-height: 500px;
  overflow-y: scroll;
  cursor: pointer;
  position: absolute;
  border-radius: 4px;
  background-color: #333;
  display: flex;
  flex-direction: column;
  justify-content: center;
  align-items: center;
  padding: 10px;
  box-sizing: border-box;
  z-index: 10000;
  color: #fff;
}
```

* **Button Appearance**: style the Summarize button to match the overlay design and ensure it is easily clickable.

```css
#summary-button {
  background: transparent;
  border: none;
  font-size: 14px;
  cursor: pointer;
  z-index: 10001;
}
```

* **Hover Effects**: add hover effects to the button to improve user interaction and provide visual feedback.

```css
#summary-button:hover {
  color: #000;
  padding: 2px;
  border-radius: 4px;
}
```

* **Disabled State**: clearly indicate when the button is disabled, helping users understand its functionality.

```css
#summary-button:disabled {
  color: #aaa;
  cursor: default;
}
```

We've completed the first part: now you can select text and receive a summary of it. The next step is to configure the interface to summarize the entire website.

To enable summarization of the entire website, we need to make the following additions:

### Add New Files

* *popup.html*: This file will define the user interface for the popup, allowing users to initiate a site-wide summary.
* *popup.js*: This script will handle the logic for the popup, including interacting with the main script and triggering the summarization process.

#### **Update manifest.json**

Add the following lines to configure the extension's popup:

```json
"action": {
  "default_title": "Summarize site",
  "default_popup": "popup.html"
}
```

* *default\_title*: Specifies the title displayed when users hover over the extension icon in the browser toolbar.
* *default\_popup*: Points to the popup.html file, which defines the popup's user interface.

With these updates, users will be able to interact with the extension via a popup to generate a summary for the entire website.

#### **Full *manifest.json* code**

```json
{
  "manifest_version": 3,
  "name": "Summarize",
  "version": "1.0",
  "description": "Write a summary of a website or text",
  "host_permissions": ["*://*.aimlapi.com/*"],
  "permissions": ["activeTab", "scripting"],
  "action": {
    "default_title": "Summarize site",
    "default_popup": "popup.html"
  },
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["scripts.js"],
      "css": ["styles.css"]
    }
  ],
  "icons": {
    "16": "icons/icon.png",
    "48": "icons/icon.png",
    "128": "icons/icon.png"
  }
}
```

### Adding Code to popup.html

Open the *popup.html* file and insert the popup markup. This code defines the structure of the popup window, includes inline styles for simplicity, and connects the *popup.js* script. At a minimum, it needs a "Summarizer" title, the **Summarize site** button with the `summarize-btn` ID that *popup.js* expects, and a reference to *popup.js*:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Summarizer</title>
    <style>
      /* Inline styles kept minimal for simplicity; adjust to taste */
      body {
        width: 300px;
        font-family: sans-serif;
        padding: 10px;
      }
      #summarize-btn {
        width: 100%;
        padding: 8px;
        cursor: pointer;
      }
    </style>
  </head>
  <body>
    <h3>Summarizer</h3>
    <!-- popup.js listens for clicks on this button -->
    <button id="summarize-btn">Summarize site</button>
    <script src="popup.js"></script>
  </body>
</html>
```

### Adding Code to *popup.js*

The final step is to implement the functionality for the *popup.js* file. The following code adds an event listener to a button in the popup, enabling it to trigger the summarization process on the active browser tab:

```javascript
document
  .getElementById('summarize-btn')
  .addEventListener('click', async function clickSummary() {
    const bntSummary = document.getElementById('summarize-btn');

    // Update button text to indicate the process has started
    bntSummary.innerText = 'Summarizing...';

    // Prevent multiple clicks during execution
    bntSummary.removeEventListener('click', clickSummary);

    // Change button style for feedback
    bntSummary.style.backgroundColor = '#0053ac';

    // Identify the active tab in the current browser window
    const [tab] = await chrome.tabs.query({
      active: true,
      currentWindow: true,
    });

    // Execute the summarizeText function
    // in the context of the active tab
    chrome.scripting.executeScript({
      target: { tabId: tab.id },
      // Function to run in the tab's environment
      func: summarizeText,
    });
  });
```

#### **Explanation of the Code**

1. Event Listener Setup:
   * The code adds a click event listener to a button with the ID *summarize-btn*. When clicked, the *clickSummary* function is executed.
2. Button Feedback:
   * The button's text is changed to 'Summarizing...' to provide feedback that the summarization process has started.
   * The event listener is removed immediately after the first click to prevent multiple triggers while the function executes.
   * The button's background color is updated to indicate a change in state visually.
3. Tab Query:
   * *chrome.tabs.query* is used to find the currently active tab in the browser window. This ensures that the summarization script runs only on the visible tab.
4. Executing the Summarization Script:
   * *chrome.scripting.executeScript* injects and runs the *summarizeText* function (we will add it later) in the context of the active tab.

### Adding Communication Between Content Scripts and the Popup

The next step is to handle communication between the *content\_script* (executing in the context of the webpage) and the popup script (*popup.js*). This ensures that the summarization results are displayed in the popup after being processed.

The following code snippet demonstrates how to listen for messages from the *content\_script* and dynamically update the popup UI to show the summarization result:

```javascript
chrome.runtime.onMessage.addListener(
  (request, sender, sendResponse) => {
    // Remove the summarize button once
    // the summary is available
    document.getElementById("summarize-btn").remove();

    // Create a new container to display the summary
    const summaryContainer = document.createElement("div");

    // Set the received summary as the container's content
    summaryContainer.innerHTML = request.text;

    // Add the container to the popup's body
    document.body.appendChild(summaryContainer);
  }
);
```

#### **Explanation of the Code**

1. Listening for Messages:
   * The *chrome.runtime.onMessage.addListener* method listens for messages sent by other parts of the extension, such as the *content\_script*.
   * The *request* parameter contains the data sent by the sender. In this case, it's expected to have a *text* property with the summarization result.
2. Removing the Button:
   * Once the summarization result is received, the *summarize-btn* button is removed from the popup to avoid redundant actions.
3. Displaying the Summary:
   * A new `<div>` element is created to display the summarization result dynamically.
   * The *innerHTML* property of the container is set to the text from the *request*, ensuring the summary is shown in the popup.
   * Finally, the new container is appended to the popup's `<body>`.

### Adding the Summarize Functionality

To complete the summarization process, the *summarizeText* function needs to be added. This function extracts the text content of the webpage and communicates with other parts of the extension to send the summarized result. Here's the code for the function:

```javascript
const summarizeText = async () => {
  // Extracts all visible text from the webpage
  const bodyText = document.body.innerText;

  // Calls an external or predefined function to generate a summary
  const summary = await getSummary(bodyText);

  // Sends the summary to other parts of the extension
  chrome.runtime.sendMessage({ text: summary });
};
```

#### **What This Code Does**

1. Extracting Text Content:
   * The function uses *document.body.innerText* to retrieve all visible text content on the webpage. This includes text from elements like paragraphs, headings, and lists but excludes hidden or non-text elements.
2. Generating a Summary:
   * The *getSummary* function (written in *scripts.js*) is called with the extracted text (*bodyText*) as an argument.
3. Sending the Summary:
   * Once the summary is generated, it is sent to other parts of the extension using *chrome.runtime.sendMessage*. This allows the extension's popup or background script to receive the summary and display it to the user.

## **Results**

**The extension is complete!** The final steps are to integrate the API key, choose the desired model, and add the extension to Chrome.

Feel free to use the following links:

[**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don't have one yet).

[**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

[**Browse our Models**](https://docs.aimlapi.com/quickstart/readme#models-by-task)

---

# Source: https://docs.aimlapi.com/quickstart/supported-sdks.md

# Supported SDKs

This page describes the SDK[^1]s that can be used to call our API.

{% hint style="success" %}
Also take a look at the [**INTEGRATIONS**](https://docs.aimlapi.com/integrations/our-integration-list) section — it covers many third-party services and libraries (workflow platforms, coding assistants, etc.) that allow you to integrate our models in various ways.
{% endhint %}

## OpenAI

In the [setting up article](https://docs.aimlapi.com/quickstart/setting-up), we showed an example of how to use the OpenAI SDK with the AI/ML API. We configured the environment from the very beginning and executed our request to the AI/ML API.

We fully support the OpenAI API structure, and you can seamlessly use the features that the OpenAI SDK provides out-of-the-box, including:

* Streaming
* Completions
* Chat Completions
* Audio
* Beta Assistants
* Beta Threads
* Embeddings
* Image Generation
* Uploads

This support provides easy integration into systems already using OpenAI's standards. For example, you can integrate our API into any product that supports LLM models by updating only two things in the configuration: the base URL and the API key.
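For instance, a minimal sketch with the OpenAI Python SDK might look like this (the model ID below is only illustrative; use any chat model ID from our model database):

{% code overflow="wrap" %}
```python
from openai import OpenAI

# Point the OpenAI SDK at the AI/ML API:
# only the base URL and the API key change.
client = OpenAI(
    base_url="https://api.aimlapi.com/v1",
    api_key="<YOUR_AIMLAPI_KEY>",  # your AI/ML API key
)

completion = client.chat.completions.create(
    model="gpt-4o",  # illustrative model ID
    messages=[{"role": "user", "content": "What kind of model are you?"}],
    max_tokens=512,
)

print(completion.choices[0].message.content)
```
{% endcode %}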
{% hint style="info" %}
[How do I configure the base URL and API key?](https://docs.aimlapi.com/quickstart/setting-up)
{% endhint %}

***

## REST API

Because we support the OpenAI API structure, our API can be used with the same endpoints as OpenAI. You can call them from any environment.

### Authorization

AI/ML API authorization is based on a Bearer token. You need to include it in the `Authorization` HTTP header of the request, for example:

```http
Authorization: Bearer 
```

### Request Example

When your token is ready, you can call our API over HTTP.

{% tabs %}
{% tab title="JavaScript" %}
```javascript
fetch("https://api.aimlapi.com/chat/completions", {
  method: "POST",
  headers: {
    Authorization: "Bearer ",
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "gpt-4o",
    messages: [
      {
        role: "user",
        content: "What kind of model are you?",
      },
    ],
    max_tokens: 512,
    stream: false,
  }),
})
  .then((res) => res.json())
  .then(console.log);
```
{% endtab %}

{% tab title="Python" %}
```python
import requests
import json

response = requests.post(
    url="https://api.aimlapi.com/chat/completions",
    headers={
        "Authorization": "Bearer ",
        "Content-Type": "application/json",
    },
    data=json.dumps(
        {
            "model": "gpt-4o",
            "messages": [
                {
                    "role": "user",
                    "content": "What kind of model are you?",
                },
            ],
            "max_tokens": 512,
            "stream": False,
        }
    ),
)
response.raise_for_status()
print(response.json())
```
{% endtab %}

{% tab title="cURL" %}
```bash
curl --request POST \
  --url https://api.aimlapi.com/chat/completions \
  --header 'Authorization: Bearer ' \
  --header 'Content-Type: application/json' \
  --data '{
    "model": "gpt-4o",
    "messages": [
      {
        "role": "user",
        "content": "What kind of model are you?"
      }
    ],
    "max_tokens": 512,
    "stream": false
  }'
```
{% endtab %}
{% endtabs %}

***

## AI/ML API Python library

We have started developing our own SDK to simplify the use of our service. Currently, it supports only chat completion and embedding models.

{% hint style="success" %}
If you'd like to contribute to expanding its functionality, feel free to reach out to us on [**Discord**](https://discord.com/invite/hvaUsJpVJf)!
{% endhint %}

### Installation

After obtaining your AIML API key, create a `.env` file and copy the required contents into it.
```shell touch .env ``` Copy the code below, paste it into your `.env` file, and set your API key in `AIML_API_KEY=""`, replacing `` with your actual key: ```shell AIML_API_KEY = "" AIML_API_URL = "https://api.aimlapi.com/v1" ``` Install `aiml_api` package: ```shell # install from PyPI pip install aiml_api ``` ### Request Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python from aiml_api import AIML_API api = AIML_API() completion = api.chat.completions.create( model = "mistralai/Mistral-7B-Instruct-v0.2", messages = [ {"role": "user", "content": "Explain the importance of low-latency LLMs"}, ], temperature = 0.7, max_tokens = 256, ) response = completion.choices[0].message.content print("AI:", response) ``` {% endcode %} {% endtab %} {% endtabs %} To execute the script, use: ```shell python3 .py ``` *** ## Next Steps * [Check our full list of model IDs](https://docs.aimlapi.com/api-references/model-database) [^1]: Software Development Kits --- # Source: https://docs.aimlapi.com/api-references/3d-generating-models/tencent.md # Source: https://docs.aimlapi.com/api-references/video-models/tencent.md # Source: https://docs.aimlapi.com/api-references/image-models/tencent.md # Tencent - [Hunyuan Image v3](/api-references/image-models/tencent/hunyuan-image-v3-text-to-image.md) --- # Source: https://docs.aimlapi.com/api-references/text-models-llm/minimax/text-01.md # text-01
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `MiniMax-Text-01`
{% endhint %}

Try in Playground
## Model Overview A powerful language model developed by MiniMax AI, designed to excel in tasks requiring extensive context processing and reasoning capabilities. With a total of 456 billion parameters, of which 45.9 billion are activated per token, this model utilizes a hybrid architecture that combines various attention mechanisms to optimize performance across a wide array of applications. ## How to Make a Call
**Step-by-Step Instructions**

:digit\_one: **Setup You Can't Skip**

:black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don't have one yet).\
:black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI.

:digit\_two: **Copy the code example**

At the bottom of this page, you'll find [a code example](#code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment.

:digit\_three: **Modify the code example**

:black\_small\_square: Replace `` with your actual AI/ML API key from your account.\
:black\_small\_square: Insert your question or request into the `content` field—this is what the model will respond to.

:digit\_four: **(Optional) Adjust other optional parameters if needed**

Only `model` and `messages` are required parameters for this model (and we've already filled them in for you in the example), but you can include optional parameters if needed to adjust the model's behavior. Below, you can find the corresponding [API schema](#api-schema), which lists all available parameters along with notes on how to use them.

:digit\_five: **Run your modified code**

Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds a few seconds.

{% hint style="success" %}
If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).
{% endhint %}
## API Schema ## POST /v1/chat/completions > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/chat/completions":{"post":{"operationId":"_v1_chat_completions","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["MiniMax-Text-01"]},"messages":{"type":"array","items":{"oneOf":[{"type":"object","properties":{"role":{"type":"string","enum":["user"],"description":"The role of the author of the message — in this case, the user"},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"type":{"type":"string","enum":["image_url"]},"image_url":{"type":"object","properties":{"url":{"type":"string","format":"uri","description":"Either a URL of the image or the base64 encoded image data. "},"detail":{"type":"string","enum":["low","high","auto"],"description":"Specifies the detail level of the image. Currently supports JPG/JPEG, PNG, GIF, and WEBP formats."}},"required":["url"]}},"required":["type","image_url"]}]}}],"description":"The contents of the user message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"]},{"type":"object","properties":{"role":{"type":"string","enum":["system"],"description":"The role of the author of the message — in this case, the system."},"content":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]}}],"description":"The contents of the system message."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."}},"required":["role","content"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["tool"],"description":"The role of the author of the message — in this case, the tool."},"content":{"type":"string","description":"The contents of the tool message."},"tool_call_id":{"type":"string","description":"Tool call that this message is responding to."},"name":{"type":"string","nullable":true,"description":"An optional name for the participant. 
Provides the model information to differentiate between participants of the same role."}},"required":["role","content","tool_call_id"],"additionalProperties":false},{"type":"object","properties":{"role":{"type":"string","enum":["assistant"],"description":"The role of the author of the message — in this case, the Assistant."},"content":{"anyOf":[{"type":"string","description":"The contents of the Assistant message."},{"type":"array","items":{"anyOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},{"type":"object","properties":{"refusal":{"type":"string","description":"The refusal message generated by the model."},"type":{"type":"string","enum":["refusal"],"description":"The type of the content part."}},"required":["refusal","type"]}]},"description":"An array of content parts with a defined type. Can be one or more of type text, or exactly one of type refusal."}],"description":"The contents of the Assistant message. Required unless tool_calls or function_call is specified."},"name":{"type":"string","description":"An optional name for the participant. Provides the model information to differentiate between participants of the same role."},"tool_calls":{"type":"array","items":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."},"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."}},"required":["name","arguments"],"description":"The function that the model called."}},"required":["id","type","function"]},"description":"The tool calls generated by the model, such as function calls."},"refusal":{"type":"string","nullable":true,"description":"The refusal message by the Assistant."}},"required":["role"]}]},"description":"A list of messages comprising the conversation so far. Depending on the model you use, different message types (modalities) are supported, like text, documents (txt, pdf), images, and audio."},"max_tokens":{"type":"number","minimum":1,"description":"The maximum number of tokens that can be generated in the chat completion. This value can be used to control costs for text generated via API."},"stream":{"type":"boolean","default":false,"description":"If set to True, the model response data will be streamed to the client as it is generated using server-sent events."},"stream_options":{"type":"object","properties":{"include_usage":{"type":"boolean"}},"required":["include_usage"]},"tools":{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"description":{"type":"string","description":"A description of what the function does, used by the model to choose when and how to call the function."},"name":{"type":"string","description":"The name of the function to be called. 
Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"parameters":{"type":"object","additionalProperties":{"nullable":true,"description":"The parameters the functions accepts, described as a JSON Schema object."}},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the function call. If set to True, the model will follow the exact schema defined in the parameters field. Only a subset of JSON Schema is supported when strict is True."}},"required":["name","parameters"],"additionalProperties":false}},"required":["type","function"],"additionalProperties":false},"description":"A list of tools the model may call. Currently, only functions are supported as a tool. Use this to provide a list of functions the model may generate JSON inputs for. A max of 128 functions are supported."},"tool_choice":{"anyOf":[{"type":"string","enum":["none","auto","required"],"description":"none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools."},{"type":"object","properties":{"type":{"type":"string","enum":["function"],"description":"The type of the tool. Currently, only function is supported."},"function":{"type":"object","properties":{"name":{"type":"string","description":"The name of the function to call."}},"required":["name"]}},"required":["type","function"],"description":"Specifies a tool the model should use. Use to force the model to call a specific function."}],"description":"Controls which (if any) tool is called by the model. none means the model will not call any tool and instead generates a message. auto means the model can pick between generating a message or calling one or more tools. required means the model must call one or more tools. Specifying a particular tool via {\"type\": \"function\", \"function\": {\"name\": \"my_function\"}} forces the model to call that tool.\n none is the default when no tools are present. auto is the default if tools are present."},"parallel_tool_calls":{"type":"boolean","description":"Whether to enable parallel function calling during tool use."},"temperature":{"type":"number","minimum":0,"maximum":1,"description":"What sampling temperature to use. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or top_p but not both."},"top_p":{"type":"number","minimum":0.01,"maximum":1,"description":"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.\n We generally recommend altering this or temperature but not both."},"prediction":{"type":"object","properties":{"type":{"type":"string","enum":["content"],"description":"The type of the predicted content you want to provide."},"content":{"anyOf":[{"type":"string","description":"The content used for a Predicted Output. 
This is often the text of a file you are regenerating with minor changes."},{"type":"array","items":{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of the content part."},"text":{"type":"string","description":"The text content."}},"required":["type","text"]},"description":"An array of content parts with a defined type. Supported options differ based on the model being used to generate the response. Can contain text inputs."}],"description":"The content that should be matched when generating a model response. If generated tokens would match this content, the entire model response can be returned much more quickly."}},"required":["type","content"],"description":"Configuration for a Predicted Output, which can greatly improve response times when large parts of the model response are known ahead of time."},"presence_penalty":{"type":"number","nullable":true,"minimum":-2,"maximum":2,"description":"Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics."},"seed":{"type":"integer","minimum":1,"description":"This feature is in Beta. If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result."},"response_format":{"oneOf":[{"type":"object","properties":{"type":{"type":"string","enum":["text"],"description":"The type of response format being defined. Always text."}},"required":["type"],"additionalProperties":false,"description":"Default response format. Used to generate text responses."},{"type":"object","properties":{"type":{"type":"string","enum":["json_object"],"description":"The type of response format being defined. Always json_object."}},"required":["type"],"additionalProperties":false,"description":"An older method of generating JSON responses. Using json_schema is recommended for models that support it. Note that the model will not generate JSON without a system or user message instructing it to do so."},{"type":"object","properties":{"type":{"type":"string","enum":["json_schema"],"description":"The type of response format being defined. Always json_schema."},"json_schema":{"type":"object","properties":{"name":{"type":"string","description":"The name of the response format. Must be a-z, A-Z, 0-9, or contain underscores and dashes, with a maximum length of 64."},"schema":{"type":"object","additionalProperties":{"nullable":true},"description":"The schema for the response format, described as a JSON Schema object."},"strict":{"type":"boolean","nullable":true,"description":"Whether to enable strict schema adherence when generating the output. If set to True, the model will always follow the exact schema defined in the schema field. Only a subset of JSON Schema is supported when strict is True."},"description":{"type":"string","description":"A description of what the response format is for, used by the model to determine how to respond in the format."}},"required":["name"],"additionalProperties":false,"description":"JSON Schema response format. Used to generate structured JSON responses."}},"required":["type","json_schema"],"additionalProperties":false,"description":"JSON Schema response format. 
Used to generate structured JSON responses."}],"description":"An object specifying the format that the model must output."},"mask_sensitive_info":{"type":"boolean","default":false,"description":"Mask (replace with ***) content in the output that involves private information, including but not limited to email, domain, link, ID number, home address, etc. Defaults to False, i.e. enable masking."}},"required":["model","messages"],"title":"MiniMax-Text-01"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"object":{"type":"string","enum":["chat.completion"],"description":"The object type."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"choices":{"type":"array","items":{"type":"object","properties":{"index":{"type":"number","description":"The index of the choice in the list of choices."},"message":{"type":"object","properties":{"role":{"type":"string","description":"The role of the author of this message."},"content":{"type":"string","description":"The contents of the message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"annotations":{"type":"array","nullable":true,"items":{"type":"object","properties":{"type":{"type":"string","enum":["url_citation"],"description":"The type of the URL citation. Always url_citation."},"url_citation":{"type":"object","properties":{"end_index":{"type":"integer","description":"The index of the last character of the URL citation in the message."},"start_index":{"type":"integer","description":"The index of the first character of the URL citation in the message."},"title":{"type":"string","description":"The title of the web resource."},"url":{"type":"string","description":"The URL of the web resource."}},"required":["end_index","start_index","title","url"],"description":"A URL citation when using web search."}},"required":["type","url_citation"]},"description":"Annotations for the message, when applicable, as when using the web search tool."},"audio":{"type":"object","nullable":true,"properties":{"id":{"type":"string","description":"Unique identifier for this audio response."},"data":{"type":"string","description":"Base64 encoded audio bytes generated by the model, in the format specified in the request."},"transcript":{"type":"string","description":"Transcript of the audio generated by the model."},"expires_at":{"type":"integer","description":"The Unix timestamp (in seconds) for when this audio response will no longer be accessible on the server for use in multi-turn conversations."}},"required":["id","data","transcript","expires_at"],"description":"A chat completion message generated by the model."},"tool_calls":{"type":"array","nullable":true,"items":{"oneOf":[{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. 
Validate the arguments in your code before calling your function."},"name":{"type":"string","description":"The name of the function to call."}},"required":["arguments","name"],"description":"The function that the model called."}},"required":["id","type","function"]},{"type":"object","properties":{"id":{"type":"string","description":"The ID of the tool call."},"type":{"type":"string","enum":["custom"],"description":"The type of the tool."},"custom":{"type":"object","properties":{"input":{"type":"string","description":"The input for the custom tool call generated by the model."},"name":{"type":"string","description":"The name of the custom tool to call."}},"required":["input","name"],"description":"The custom tool that the model called."}},"required":["id","type","custom"]}]},"description":"The tool calls generated by the model, such as function calls."}},"required":["role","content"],"description":"A chat completion message generated by the model."},"finish_reason":{"type":"string","enum":["stop","length","content_filter","tool_calls"],"description":"The reason the model stopped generating tokens. This will be stop if the model hit a natural stop point or a provided stop sequence, length if the maximum number of tokens specified in the request was reached, content_filter if content was omitted due to a flag from our content filters, tool_calls if the model called a tool"},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message content tokens with log probability information."},"refusal":{"type":"array","items":{"type":"object","properties":{"bytes":{"type":"array","items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"bytes":{"type":"array","nullable":true,"items":{"type":"integer"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"token":{"type":"string","description":"The token."}},"required":["logprob","token"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["bytes","logprob","token"]},"description":"A list of message refusal tokens with log probability information."}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["index","message","finish_reason"]}},"model":{"type":"string","description":"The model used for the chat completion."},"usage":{"type":"object","properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. 
However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","object","created","choices","model","usage"]}},"text/event-stream":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"A unique identifier for the chat completion."},"choices":{"type":"array","items":{"type":"object","properties":{"delta":{"type":"object","nullable":true,"properties":{"content":{"type":"string","description":"The contents of the chunk message."},"refusal":{"type":"string","nullable":true,"description":"The refusal message generated by the model."},"role":{"type":"string","enum":["user","assistant","developer","system","tool"],"description":"The role of the author of this message."},"tool_calls":{"type":"array","nullable":true,"items":{"type":"object","properties":{"index":{"type":"number"},"id":{"type":"string","description":"The ID of the tool call."},"function":{"type":"object","properties":{"arguments":{"type":"string","description":"The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may hallucinate parameters not defined by your function schema. Validate the arguments in your code before calling your function."},"name":{"type":"string"}},"required":["arguments","name"],"description":"The function that the model called."},"type":{"type":"string","enum":["function"],"description":"The type of the tool."}},"required":["index","id","function","type"]},"description":"The tool calls generated by the model, such as function calls."}},"required":["content","role"],"description":"A chat completion delta generated by streamed model responses."},"finish_reason":{"type":"string","enum":["length","function_call","stop","tool_calls","content_filter"]},"index":{"type":"number","description":"The index of the choice in the list of choices."},"logprobs":{"type":"object","nullable":true,"properties":{"content":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. 
Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}},"refusal":{"type":"array","items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."},"top_logprobs":{"type":"array","nullable":true,"items":{"type":"object","properties":{"token":{"type":"string","description":"The token."},"bytes":{"type":"array","items":{"type":"number"},"description":"A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token."},"logprob":{"type":"number","description":"The log probability of this token, if it is within the top 20 most likely tokens. Otherwise, the value -9999.0 is used to signify that the token is very unlikely."}},"required":["token","bytes","logprob"]},"description":"List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested top_logprobs returned."}},"required":["token","bytes","logprob"]}}},"required":["content","refusal"],"description":"Log probability information for the choice."}},"required":["finish_reason","index"]},"description":"A list of chat completion choices. 
Can be more than one if n is greater than 1."},"created":{"type":"number","description":"The Unix timestamp (in seconds) of when the chat completion was created."},"model":{"type":"string","description":"The model used for the chat completion."},"object":{"type":"string","enum":["chat.completion.chunk"],"description":"The object type."},"service_tier":{"type":"string","nullable":true,"enum":["auto","default","flex","scale","priority"],"description":"Specifies the processing type used for serving the request."},"usage":{"type":"object","nullable":true,"properties":{"prompt_tokens":{"type":"number","description":"Number of tokens in the prompt."},"completion_tokens":{"type":"number","description":"Number of tokens in the generated completion."},"total_tokens":{"type":"number","description":"Total number of tokens used in the request (prompt + completion)."},"completion_tokens_details":{"type":"object","nullable":true,"properties":{"accepted_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that appeared in the completion."},"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens generated by the model."},"reasoning_tokens":{"type":"integer","nullable":true,"description":"Tokens generated by the model for reasoning."},"rejected_prediction_tokens":{"type":"integer","nullable":true,"description":"When using Predicted Outputs, the number of tokens in the prediction that did not appear in the completion. However, like reasoning tokens, these tokens are still counted in the total completion tokens for purposes of billing, output, and context window limits."}},"description":"Breakdown of tokens used in a completion."},"prompt_tokens_details":{"type":"object","nullable":true,"properties":{"audio_tokens":{"type":"integer","nullable":true,"description":"Audio input tokens present in the prompt."},"cached_tokens":{"type":"integer","nullable":true,"description":"Cached tokens present in the prompt."}},"description":"Breakdown of tokens used in the prompt."}},"required":["prompt_tokens","completion_tokens","total_tokens"],"description":"Usage statistics for the completion request."}},"required":["id","choices","created","model","object"]}}}}}}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/chat/completions", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json" }, json={ "model":"MiniMax-Text-01", "messages":[ { "role":"user", "content":"Hello" # insert your prompt here, instead of Hello } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/chat/completions', { method: 'POST', headers: { // insert your AIML API Key instead of 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'MiniMax-Text-01', messages:[ { role:'user', content: 'Hello' // insert your prompt here, instead of Hello } ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response

{% code overflow="wrap" %}
```json5
{
  "id": "04a9c0b5acca8b79bf1aba62f288f3b7",
  "object": "chat.completion",
  "choices": [
    {
      "index": 0,
      "finish_reason": "stop",
      "message": {
        "role": "assistant",
        "content": "Hello! How are you doing today? I'm here and ready to chat about anything you'd like to discuss or help with any questions you might have."
      }
    }
  ],
  "created": 1750764981,
  "model": "MiniMax-Text-01",
  "usage": {
    "prompt_tokens": 299,
    "completion_tokens": 67,
    "total_tokens": 366
  }
}
```
{% endcode %}
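The API schema above also defines a `text/event-stream` response made up of `chat.completion.chunk` objects, whose `delta` fields carry the assistant message incrementally. The sketch below is a hedged illustration of consuming such a stream with the OpenAI-compatible Python SDK; it assumes the endpoint honors the standard `stream` request parameter, which the example above does not show.

```python
from openai import OpenAI

# Hedged sketch: assumes the chat completions endpoint accepts the standard
# OpenAI-compatible `stream` parameter and then emits the text/event-stream
# chunks described in the schema above.
client = OpenAI(
    # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
    api_key="<YOUR_AIMLAPI_KEY>",
    base_url="https://api.aimlapi.com/v1",
)

stream = client.chat.completions.create(
    model="MiniMax-Text-01",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)

# Each chunk mirrors the streaming schema: choices[0].delta.content holds the
# next text fragment, and finish_reason is set on the final content chunk.
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```

Without `stream=True`, the same call returns the single `chat.completion` object shown in the response above.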
---

# Source: https://docs.aimlapi.com/api-references/embedding-models/openai/text-embedding-3-large.md

# text-embedding-3-large

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `text-embedding-3-large`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}

## Model Overview

A next-generation embedding model that offers superior performance and flexibility. It converts text into high-dimensional numerical representations that are highly effective for various machine learning tasks.

## Setup your API Key

If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).

## API Schema

{% openapi src="" path="/v1/embeddings" method="post" %}
[text-embedding-3-large.json](https://3927338786-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FROMd1X5PuqtikJ48n2N9%2Fuploads%2Fgit-blob-600ce848d24e68a2fa757ad90bb3c615de4385b5%2Ftext-embedding-3-large.json?alt=media\&token=13ab8b8a-a861-42a2-adb9-ad9bee48d24d)
{% endopenapi %}

## Code Example

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import openai

# Initialize the API client
client = openai.OpenAI(
    # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
    api_key="<YOUR_AIMLAPI_KEY>",
    base_url="https://api.aimlapi.com/v1",
)

# Define the text for which to generate an embedding
text = "Laura is a DJ."

# Request the embedding
response = client.embeddings.create(
    input=text,
    model="text-embedding-3-large"
)

# Print the embedding
print(response)
```
{% endcode %}
{% endtab %}

{% tab title="JS" %}
{% code overflow="wrap" %}
```javascript
import OpenAI from "openai";
import util from "util";

// Initialize the API client
const client = new OpenAI({
  // Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
  apiKey: "<YOUR_AIMLAPI_KEY>",
  baseURL: "https://api.aimlapi.com/v1",
});

// Define the text for which to generate an embedding
const text = "Laura is a DJ.";

const response = await client.embeddings.create({
  input: text,
  model: "text-embedding-3-large",
});

// Convert embedding to a regular array (not TypedArray)
const pythonLikeResponse = {
  ...response,
  data: response.data.map(item => ({
    ...item,
    embedding: Array.from(item.embedding),
  })),
};

// Python-like print
console.log(
  util.inspect(pythonLikeResponse, {
    depth: null,
    maxArrayLength: null,
    compact: true,
  })
);
```
{% endcode %}
{% endtab %}
{% endtabs %}

This example shows how to set up an API client, send text to the embedding API, and print the response with the embedding vector. Notice how large a vector the model returns for just a single short input phrase.
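Since `response.data[0].embedding` is just a list of floats, the returned vector can be compared or indexed directly. The sketch below is purely illustrative (the `cosine_similarity` helper and the second phrase are not part of the API): it reuses the same client to embed two phrases and measure how similar they are. The raw response for the single-phrase request follows after it.

```python
import math
import openai

client = openai.OpenAI(
    # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
    api_key="<YOUR_AIMLAPI_KEY>",
    base_url="https://api.aimlapi.com/v1",
)

def embed(text: str) -> list[float]:
    """Return the embedding vector for a single input string."""
    response = client.embeddings.create(input=text, model="text-embedding-3-large")
    return response.data[0].embedding

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Illustrative helper: cosine of the angle between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Hypothetical phrases, chosen only to show how two embeddings can be compared
print(cosine_similarity(embed("Laura is a DJ."), embed("Laura plays music at clubs.")))
```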
Response {% code overflow="wrap" %} ```json CreateEmbeddingResponse(data=[Embedding(embedding=[0.02531846985220909, -0.04148460552096367, -0.018977636471390724, 0.022566787898540497, -0.058921895921230316, -0.00015363717102445662, -0.022701380774378777, 0.007440011017024517, -0.01123105175793171, 0.05341853201389313, -0.006075385957956314, 0.024376317858695984, -0.04139487445354462, -0.011717082932591438, -0.0145958811044693, -0.06783495843410492, -0.03971993923187256, -0.010206648148596287, 0.0009472928941249847, 0.018185032531619072, 0.020099246874451637, 0.013436884619295597, -0.01047583483159542, 0.03394738584756851, 0.016435321420431137, 0.017975665628910065, 0.007881177589297295, 0.01812521368265152, 8.388706191908568e-05, -0.01665964350104332, 0.04175379127264023, 0.011769424192607403, -0.0013188261073082685, -0.04145469516515732, -0.03427639231085777, -0.022536877542734146, 0.02482496201992035, -0.01276391837745905, -0.024780096486210823, -0.04112568870186806, -0.007193257100880146, 0.01410984992980957, -0.019874924793839455, 0.0009944939520210028, 0.013788321986794472, -0.004766841419041157, 0.011739514768123627, 0.046060770750045776, -0.0029460948426276445, 0.008756033144891262, -0.006509074941277504, 0.027965469285845757, -0.006961457431316376, 0.011141323484480381, -0.0031236831564456224, -0.03397729620337486, -0.013167697936296463, 0.0018693495076149702, -0.022970566526055336, -0.022776154801249504, 0.008980355225503445, 0.0056641292758286, -0.026604581624269485, -0.0019310381030663848, -0.002261912915855646, -0.004243423230946064, 0.012659234926104546, 0.05114540457725525, -0.02865338884294033, 0.04217252507805824, -0.002891883719712496, -0.004763102624565363, -0.006310923956334591, 0.015463259071111679, 0.030193733051419258, -0.03801509365439415, 0.0038882470689713955, 0.008157840929925442, -0.04417646676301956, -0.031046157702803612, 0.04205288738012314, -0.04884236305952072, 0.032033175230026245, 0.0067819999530911446, -0.014147236943244934, 0.031135885044932365, -0.0041200462728738785, -0.07543199509382248, -0.0027797226794064045, 0.010715111158788204, 0.0190225001424551, 0.00809802208095789, 0.007036231458187103, 0.015837129205465317, 0.005522058345377445, -0.02005438134074211, -0.012098429724574089, -0.01685405522584915, -0.010288899764418602, 0.0022189179435372353, -0.014603358693420887, -0.004920127801597118, 0.00269934069365263, -0.035532597452402115, 0.002230134094133973, -0.02102644369006157, 0.006524029653519392, 0.03774590417742729, 0.004609815776348114, -0.003047039732336998, 0.01812521368265152, -0.010057100094854832, -0.008875671774148941, 0.013451838865876198, -0.02718781866133213, 0.04872272536158562, -0.03514377027750015, 0.019351506605744362, 0.03481476381421089, -0.012704099528491497, 0.028967440128326416, -0.0379851832985878, 0.01592685841023922, 0.024436136707663536, 0.014917409047484398, -0.00017548518371768296, 0.014349127188324928, 0.020009517669677734, 0.029146898537874222, 0.03427639231085777, -0.021011488512158394, 0.07208211719989777, -0.0056716063991189, -0.028070151805877686, 0.01866358518600464, -0.005047243554145098, -0.00673713581636548, 0.006826864555478096, 0.0105954734608531, -0.00933179259300232, 0.012352661229670048, -0.008352253586053848, 0.01869349554181099, -0.014184623956680298, 0.000802418275270611, -0.03876283019781113, -0.0038994632195681334, -0.013563999906182289, -0.003082557348534465, -0.024944599717855453, -0.0009318707161583006, 0.007679287809878588, 0.020607709884643555, 0.03149480000138283, 0.010513221845030785, 
0.019575828686356544, -0.03011895902454853, -0.008853239007294178, -0.019800150766968727, -0.001506695756688714, -0.006138944067060947, 0.00866630394011736, 0.02143022231757641, 0.00021298900537658483, -0.02824961021542549, 0.006550200749188662, 0.03215281292796135, -0.00937665719538927, -0.013302290812134743, 0.015702536329627037, -0.02815988101065159, -0.005293997935950756, 0.012419958598911762, 0.02672422118484974, 0.012180681340396404, 0.012614370323717594, 0.02168445475399494, 0.023942628875374794, 0.015448304824531078, -0.030896609649062157, 0.0055706617422401905, 0.024839917197823524, 0.008808375336229801, -0.049560192972421646, -0.0452532134950161, 0.015852084383368492, 0.02859356999397278, -0.009122425690293312, 0.02431649900972843, -0.02458568476140499, -0.02431649900972843, -0.024047311395406723, -0.026469988748431206, -0.05763578414916992, -0.02143022231757641, 0.0044228811748325825, 0.04647950828075409, -0.0037386990152299404, -0.005877234973013401, 0.025168921798467636, -0.008748555555939674, 0.00466589629650116, 0.04857317730784416, 0.0077166748233139515, 0.006221195217221975, 0.0022020938340574503, 0.007111005485057831, 0.023837944492697716, -0.003839643904939294, 0.014782816171646118, 0.020532935857772827, 0.05895180627703667, 0.04698796942830086, 0.004426619503647089, 0.009892597794532776, -0.005379987880587578, -0.0014711780240759254, 0.033199649304151535, 0.01579226553440094, -0.009922507219016552, 0.04627013951539993, 0.011971314437687397, -0.011492760851979256, 0.03035823628306389, 0.010251512750983238, 0.030193733051419258, -0.008875671774148941, -0.013414451852440834, -0.029371220618486404, -0.01831962540745735, 0.009526205249130726, 0.02685881406068802, 0.00937665719538927, 0.03774590417742729, 0.006187546998262405, 0.008584053255617619, -0.004979947116225958, 0.0047182380221784115, 0.017736388370394707, 0.004493916407227516, -0.021175991743803024, -0.018917817622423172, 0.002486234763637185, -0.04061722755432129, -0.008255047723650932, -0.04241180047392845, 0.08416559547185898, -0.005794983357191086, 0.03412684425711632, 0.0071184830740094185, -0.01888790726661682, -0.038703013211488724, 0.0027909388300031424, -0.015567942522466183, -0.03248181566596031, -0.006228672806173563, -0.01016178447753191, -0.02925158105790615, 0.029236625880002975, -0.005727686919271946, -0.02669431082904339, -0.02542315423488617, -0.006946502719074488, 0.004950037691742182, -0.0050360276363790035, -0.05500373989343643, -0.027965469285845757, -0.022148054093122482, 0.04100605100393295, -0.03185371682047844, -0.032033175230026245, 0.03622051700949669, 0.03248181566596031, 0.026365306228399277, 0.010468357242643833, -0.02621575817465782, -0.03071715123951435, 0.010902046225965023, 0.0034657740034163, -0.004224729724228382, 0.032571546733379364, 0.018334580585360527, 0.03394738584756851, -0.0322425402700901, -0.024032358080148697, -0.011440418660640717, 0.07375705242156982, -0.017975665628910065, 0.023164980113506317, 0.02528855949640274, -0.021116172894835472, 0.006654884200543165, 0.02548297308385372, 0.020532935857772827, 0.01812521368265152, -0.014311740174889565, 0.0010860920883715153, -0.005611787084490061, -0.0012019916903227568, 0.008681259118020535, 0.02012915536761284, 0.02168445475399494, -0.03846373409032822, -0.06358779221773148, 0.03179389610886574, 0.022432195022702217, -0.027367277070879936, -0.009279451332986355, -0.009107470512390137, 0.04465502128005028, 0.008838284760713577, 0.04333899915218353, 0.011320780962705612, -0.02655971795320511, -0.0036601864267140627, 
-0.019411325454711914, -0.014035075902938843, -0.004015362821519375, 0.026036299765110016, -0.019411325454711914, 0.043997012078762054, 0.04324927181005478, -0.045761674642562866, 0.030044184997677803, 0.01885799877345562, 0.011567534878849983, 0.003233974566683173, -0.02632044069468975, 0.03221262991428375, -0.02005438134074211, 0.007578343152999878, -0.010685201734304428, -0.016210999339818954, -0.01576235517859459, 0.05150431767106056, 0.028937529772520065, -0.021998506039381027, 0.0077316295355558395, 0.015149208717048168, -0.02012915536761284, 0.0067670452408492565, 0.03028346225619316, 0.016345592215657234, 0.03921147435903549, 0.013025627471506596, -0.021445177495479584, 0.014954796060919762, 0.03717762231826782, -0.0024338930379599333, 0.02102644369006157, 0.0009019611752592027, -0.02838420309126377, 0.005952008999884129, 0.020712392404675484, 0.029445994645357132, -0.03487458452582359, -0.010782408528029919, -0.05455509573221207, -0.01959078386425972, 0.019545918330550194, 0.018349535763263702, 0.009832778945565224, -0.013085446320474148, -0.01616613380610943, 0.04704779013991356, 0.03535313904285431, -0.010386105626821518, -0.05808442831039429, 0.03269118443131447, 0.023508939892053604, 0.004404187668114901, -0.012853647582232952, -0.05954999849200249, -0.01882808841764927, -0.04357827454805374, 0.017616750672459602, -0.00827747955918312, -0.011059071868658066, -0.00506593706086278, 0.0031722860876470804, -0.015837129205465317, 0.019904833287000656, -0.04321936145424843, 0.007036231458187103, 0.004366800654679537, 0.018633676692843437, -0.010341241955757141, -0.03215281292796135, -0.015807218849658966, 0.00015936203999444842, -0.015418394468724728, -0.016405411064624786, -0.017781252041459084, 0.03457548841834068, 0.018573855981230736, -0.011814288794994354, 0.011455373838543892, 0.0048677860759198666, -0.025139013305306435, -0.024406228214502335, -0.08578070998191833, -0.018035484477877617, -0.002260043518617749, -0.015403440222144127, -0.0019889879040420055, -0.008202705532312393, -0.0009215893223881721, -0.04417646676301956, 0.04274080693721771, -0.0219237320125103, 0.0008318605250678957, -0.01655495911836624, -0.0031928489916026592, -0.013526612892746925, 0.0014319217298179865, 0.009384134784340858, 0.03678879886865616, 0.007694242522120476, -0.03418666496872902, 0.006636190693825483, 0.037865545600652695, 0.08069608360528946, 0.02892257645726204, 0.04917136952280998, -0.017452247440814972, 0.012113384902477264, 0.02655971795320511, -0.004407925996929407, 0.020278703421354294, -0.029475903138518333, 0.02455577626824379, -0.03128543496131897, -0.012771395966410637, -0.013384542427957058, -0.028638435527682304, -0.013601386919617653, 0.0036788799334317446, 0.004796750843524933, -0.037835635244846344, 0.01402759924530983, -0.030403099954128265, -0.0120759978890419, 6.787345228076447e-06, 0.04698796942830086, -0.01310787908732891, -0.010745021514594555, 0.0016412888653576374, 0.01287607941776514, -0.004587383940815926, 0.014050031080842018, -0.0037200055085122585, -0.05485419183969498, -0.043398819863796234, -0.03128543496131897, -0.02002447284758091, -0.015867039561271667, 0.02292570285499096, 0.06062674522399902, -0.047526340931653976, -0.009227109141647816, -0.002691863337531686, 0.02922167256474495, 0.009047651663422585, 0.002061892533674836, -0.04881245642900467, -0.017018558457493782, 0.0017889675218611956, 0.024914691224694252, -0.01287607941776514, 0.03340901434421539, 0.048094622790813446, -0.046928148716688156, 0.026499899104237556, 0.008314866572618484, 
-0.008337299339473248, 0.013534090481698513, -0.008823329582810402, -0.017033513635396957, -0.030866699293255806, 0.006067908369004726, 0.03296037018299103, 0.014154714532196522, 0.04701787978410721, 0.017242880538105965, 0.0554523840546608, -0.02582693286240101, 0.020712392404675484, 0.03918156772851944, -0.01636054739356041, -0.0409761406481266, -0.00719699589535594, 0.039600301533937454, -0.032870642840862274, -0.020308613777160645, 0.009204677306115627, -0.01127591636031866, 0.028473932296037674, -0.039600301533937454, -0.0032956632785499096, -0.006512813735753298, 0.03980966657400131, 0.05584120750427246, -0.025243695825338364, 0.012950853444635868, 0.0003451286465860903, -0.03565223515033722, -0.006467949133366346, -0.0069016385823488235, -0.01367616094648838, -0.0336482897400856, 0.0003509703674353659, 0.05018829554319382, -0.008337299339473248, 0.003828427754342556, -0.011141323484480381, 0.025213787332177162, -0.012360138818621635, 0.005996873136609793, -0.008584053255617619, -0.02428658865392208, 0.00489769596606493, 0.005200530402362347, 0.03735708072781563, 0.0043032425455749035, -0.026469988748431206, 0.010573040693998337, -0.005873496178537607, 0.01336211059242487, -0.03302019089460373, 0.014079940505325794, -0.025662429630756378, 0.005723948124796152, 0.06556182354688644, 0.012188158929347992, -0.002142274519428611, 0.006733397021889687, 0.010715111158788204, 0.010887091979384422, -0.009802868589758873, -0.016480185091495514, 0.0070848348550498486, -0.025243695825338364, 0.012046088464558125, -0.00020621261501219124, 0.05676840618252754, 0.013339677825570107, -0.012659234926104546, -0.01482768077403307, -0.008419550023972988, 0.00497246952727437, -0.007043709047138691, -0.0012702230596914887, 0.021205900236964226, -0.008180273696780205, -0.010984297841787338, -0.0029479642398655415, 0.006681055296212435, 0.01912718452513218, 0.017751343548297882, 0.00687172869220376, 0.006916593294590712, -0.03328937664628029, -0.013519136235117912, -0.0009365440928377211, -0.002861974062398076, -0.002667561871930957, 0.03155462071299553, 0.019501054659485817, -0.03331928700208664, -0.02575215883553028, -0.0020282443147152662, -0.031674258410930634, -0.009675753302872181, -0.0005645435303449631, -0.024929644539952278, -0.0012038610875606537, 0.004968731198459864, 0.02458568476140499, -0.021579770371317863, -0.004695806186646223, 0.04130514711141586, -0.024181906133890152, -0.0013412582920864224, -0.007656855508685112, 0.00683060334995389, 0.0035087689757347107, -0.0050210729241371155, 0.009122425690293312, 0.02002447284758091, 0.022566787898540497, -0.02232751064002514, -0.04247162118554115, 0.03568214550614357, 0.0008828003192320466, -0.0106104277074337, 0.01404255349189043, 0.002125450409948826, 0.017870981246232986, -0.025497928261756897, -0.025333425030112267, 0.008726123720407486, 0.024107132107019424, -0.04791516810655594, 0.008800897747278214, -0.002454455941915512, 0.012846169993281364, 0.01875331439077854, 0.0019927266985177994, -0.021175991743803024, -0.0116647407412529, -0.00653150724247098, 0.002867582254111767, 0.002618958707898855, 0.01256202906370163, 0.004542519338428974, -0.013750934973359108, 0.014334172010421753, 0.010535653680562973, -0.024510910734534264, 0.021146081387996674, 0.007769016548991203, -0.012397525832057, -0.00606417004019022, -0.011006729677319527, -0.031704168766736984, 0.001708585419692099, 0.0005678149173036218, -0.013160220347344875, 0.0025198832154273987, 0.019411325454711914, 0.027905650436878204, -0.00266943103633821, 0.0009201873326674104, 
-0.014670655131340027, -0.0005463173729367554, 0.025647476315498352, 0.008479369804263115, 0.014805248007178307, 0.036370065063238144, -0.014356604777276516, -0.06801441311836243, -0.0053725107572972775, 0.0021890082862228155, 0.03122561424970627, -0.021938685327768326, 0.0262606218457222, 0.0013094793539494276, 0.0022675208747386932, 0.03544286638498306, 0.024630548432469368, -0.02335939183831215, 0.02439127303659916, 0.03421657532453537, 0.015313711017370224, 0.0025516620371490717, -0.011866630986332893, 0.03529331833124161, -0.011440418660640717, -0.01709333248436451, 0.04447556287050247, -0.015493168495595455, -0.026739176362752914, 0.011657264083623886, -0.009271973744034767, -0.02965536154806614, 0.004807966761291027, 0.000811765028629452, -0.02151995152235031, -0.0365196131169796, 0.02175922878086567, 0.002648868365213275, -0.02895248495042324, -0.027471961453557014, -0.02146013267338276, 0.015358575619757175, -0.01054313126951456, 0.0012711576418951154, 0.004639725666493177, -0.008681259118020535, -0.028578614816069603, -0.010924478992819786, -0.01882808841764927, 0.016719462350010872, 0.005462239496409893, 0.026335395872592926, -0.022806063294410706, 0.013593909330666065, 0.02095166966319084, 0.003454557852819562, 0.0010066446848213673, -0.04100605100393295, -0.013810754753649235, 0.006696010008454323, 0.019336551427841187, -0.004363061860203743, -0.017975665628910065, -0.007656855508685112, 0.0010206648148596287, -0.036968257278203964, -0.02738223224878311, 0.01939637027680874, -0.02881789207458496, 0.020802121609449387, 0.02032356895506382, 0.009974849410355091, 0.0167942363768816, -0.0009276646887883544, -0.028040243312716484, 0.0029853512533009052, -0.029894636943936348, 0.02335939183831215, -0.02325470745563507, 0.010281422175467014, 0.03562232479453087, 0.01264427974820137, -0.02436136268079281, 0.0027572906110435724, -0.012584460899233818, -0.0016954999882727861, 0.005279043223708868, 0.017212970182299614, -0.00336856790818274, -0.0075820814818143845, 0.021235810592770576, 0.01609136164188385, 0.017183061689138412, 0.027038272470235825, -0.03448576107621193, 0.012412481009960175, 0.004430358298122883, 0.01342940703034401, 0.008053157478570938, 0.023075250908732414, -0.012786351144313812, 0.040766775608062744, 0.020114200189709663, -0.002871320815756917, 0.013122833333909512, 0.01612127013504505, 0.015493168495595455, 0.030627422034740448, -0.023209843784570694, 0.009242064319550991, 0.008741077966988087, -0.05359799042344093, 0.010445925407111645, 8.306922245537862e-05, 0.004321936052292585, -0.012158249504864216, -0.011769424192607403, 0.004677112679928541, -0.0018889777129516006, -0.007234382443130016, 0.016151180490851402, -0.026499899104237556, 0.04734688624739647, 0.019037455320358276, 0.0160614512860775, -0.011754469946026802, -0.026769084855914116, 0.003413432277739048, -0.013915438205003738, -0.028144925832748413, -0.009593501687049866, -0.01379579957574606, 0.018140166997909546, 0.0012683536624535918, -0.03266127407550812, -0.0197552852332592, -0.006269798148423433, 0.001232836046256125, -0.012307797558605671, -0.01695873960852623, -0.010124397464096546, -0.0008309258846566081, -0.013444362208247185, 0.05063693970441818, -0.01785602606832981, -0.010759975761175156, -0.006094079464673996, 0.0016225953586399555, -0.02965536154806614, 0.0063034468330442905, -0.03014886938035488, -1.7700403986964375e-05, -0.01831962540745735, -0.0012889164499938488, -0.0027479438576847315, -0.0035199851263314486, -0.01422948855906725, 0.01422948855906725, 0.0008542927098460495, 
0.00803072564303875, -0.01855890266597271, 0.008651349693536758, 0.02162463590502739, 0.0062174564227461815, -0.023000476881861687, -0.0003762065898627043, -0.013040582649409771, 0.01802052929997444, -0.0024264156818389893, 0.027696281671524048, -0.005525797139853239, 0.01573244482278824, -0.03266127407550812, 0.03361838310956955, -0.04734688624739647, -0.007447488605976105, 0.019545918330550194, 0.04962001368403435, -0.03071715123951435, 0.020398341119289398, 0.015552988275885582, 0.0012963939225301147, -0.017018558457493782, -0.011021684855222702, 0.025303514674305916, -0.016315681859850883, -0.020278703421354294, 0.020413296297192574, -0.004830399062484503, -0.00876351073384285, 0.011545103043317795, 0.010356196202337742, -0.01390796061605215, 0.006247366312891245, -0.01769152469933033, 0.02439127303659916, -0.0027647679671645164, -0.012427435256540775, -0.024705322459340096, -0.006621235981583595, -0.0437876433134079, -0.03284073248505592, -0.022417239844799042, 0.038134731352329254, -0.0035723268520087004, 0.00360223650932312, 0.029565632343292236, 0.037267353385686874, -0.040198493748903275, 0.03732717037200928, -0.002637652214616537, -0.007709197234362364, -0.01221059076488018, -0.008352253586053848, -0.05308952555060387, 0.005114540457725525, -0.011881585232913494, 0.031105976551771164, 0.010655292309820652, 0.009974849410355091, -0.02222282625734806, -0.0027012100908905268, -0.01429678499698639, -0.0299245472997427, 0.0032937938813120127, 0.007806403562426567, 0.009324315004050732, 0.009451431222259998, -0.01022908091545105, 0.008950445801019669, 0.0033031406346708536, -0.015552988275885582, -0.046329960227012634, -0.022013459354639053, -0.01159744430333376, 0.02596152573823929, 0.010782408528029919, 0.005279043223708868, 0.01416219212114811, -9.656358452048153e-05, 0.029266536235809326, -0.009772959165275097, 0.0021347971633076668, -0.0004355584387667477, -0.008382163010537624, 0.011649786494672298, -0.004127523861825466, -0.008875671774148941, 0.04857317730784416, -0.013459316454827785, 0.022865884006023407, -0.02599143609404564, 0.011103936471045017, -0.0010075793834403157, 0.01176194753497839, 0.008845762349665165, 0.0013889266410842538, -0.000853358069434762, 0.010550608858466148, -0.003243321320042014, -0.018080348148941994, -0.0013188261073082685, 0.046898242086172104, -0.0005608048522844911, -0.008591530844569206, 0.02089185081422329, 0.01565767079591751, -0.004561212845146656, -0.00041663128649815917, -0.0012870471691712737, 0.006651145406067371, 0.03149480000138283, 0.02768132835626602, -0.03601114824414253, 0.008980355225503445, 0.012472299858927727, 0.007200734224170446, -0.014334172010421753, 0.00689789978787303, -0.007671810686588287, 0.0017553191864863038, 0.008337299339473248, -0.01633063703775406, -0.02525865100324154, -0.0025310993660241365, 0.031195705756545067, 0.0047182380221784115, -0.007021276745945215, 0.016734417527914047, -0.019516009837388992, 0.0069315480068326, 0.0011954490328207612, -0.0038695535622537136, 0.00266569247469306, -0.0007893328438512981, 0.0131303109228611, -0.0036583170294761658, 0.013578955084085464, 0.01264427974820137, -0.0182598065584898, -0.011634831316769123, 0.002630174858495593, 0.021983550861477852, 0.014999660663306713, 0.0013188261073082685, -0.017616750672459602, -0.004953776020556688, -0.004392971284687519, 0.03299028053879738, 0.017078377306461334, -0.016883965581655502, -0.0064156074076890945, 0.018304670229554176, -0.03660934045910835, 0.015538033097982407, -0.0029984365683048964, 0.008980355225503445, 0.0234192106872797, 
0.00012022253940813243, -0.019545918330550194, 0.013167697936296463, 0.023105159401893616, 0.013160220347344875, -0.02575215883553028, -0.02312011457979679, 0.018274761736392975, -0.00023635587422177196, -0.01842430979013443, -0.009421521797776222, -0.0020151587668806314, -0.010303854942321777, -0.01477533858269453, 0.02569233998656273, 0.02629053220152855, 0.012464822269976139, -0.010625382885336876, -0.005649174097925425, -0.004407925996929407, -0.015478214249014854, -0.002220787340775132, 0.0029666577465832233, 0.018932772800326347, 0.02132553979754448, -0.00717082479968667, 0.005110801663249731, -0.041364967823028564, -0.0061838082037866116, 0.003224628046154976, 0.02262660674750805, 0.017811162397265434, 0.004228468518704176, 0.008187751285731792, -0.007062402553856373, -0.004000408109277487, 0.02789069525897503, 0.04345863685011864, 0.011567534878849983, -0.0048042284324765205, -0.028144925832748413, -0.030896609649062157, -0.010326286777853966, 0.05389708653092384, -0.031076066195964813, -0.01736251823604107, -0.018514037132263184, -0.024181906133890152, -0.0035498947836458683, -0.03598123788833618, 0.02455577626824379, 0.0013889266410842538, -0.0019534702878445387, -0.013721025548875332, 0.04615050181746483, -0.01429678499698639, -0.019441235810518265, 0.00869621429592371, 0.011784379370510578, 0.017945755273103714, 0.0168241448700428, -0.0009823432192206383, 0.013713547959923744, -0.013212562538683414, -0.002149751875549555, 0.0027591600082814693, -0.01440894603729248, 0.04704779013991356, 0.08679763972759247, 0.009122425690293312, -0.026903677731752396, -0.010894568637013435, 0.00862891785800457, -0.0012767657171934843, 0.005806199740618467, -0.009705662727355957, 0.025034328922629356, -0.011156277731060982, -0.002506797667592764, 0.0050210729241371155, 0.010049623437225819, -0.0023460336960852146, -0.024271633476018906, 0.011948882602155209, -0.025468017905950546, 0.025243695825338364, -0.007432533893734217, -0.008584053255617619, -0.01739242859184742, -0.021250765770673752, 0.01685405522584915, 0.0002053947828244418, 0.03475494682788849, -0.02292570285499096, -0.013848140835762024, 0.018648630008101463, 0.0029348786920309067, -0.0175270214676857, 0.02056284435093403, 0.0023535110522061586, -0.021579770371317863, -0.008374685421586037, -0.003628407372161746, 0.008142886683344841, -0.020308613777160645, 0.003454557852819562, 0.006542723160237074, 0.034934405237436295, 0.011313303373754025, -0.0075820814818143845, 0.016839100047945976, 0.01139555498957634, 0.019815104082226753, -0.004916389472782612, -0.004893957171589136, -0.006187546998262405, 0.01123105175793171, -0.00901026464998722, -0.01482768077403307, 0.003409693483263254, 0.0027722453232854605, 0.016136225312948227, 0.010371151380240917, -0.002987220650538802, 0.002495581516996026, -0.007544694468379021, -0.013885527849197388, -0.005260349716991186, -0.007006322033703327, -0.023404255509376526, -0.021968595683574677, -0.02151995152235031, -0.056858133524656296, -0.001497349003329873, 0.013332201167941093, 0.010573040693998337, -0.007679287809878588, -0.0351736806333065, 0.004845353774726391, -0.02422676980495453, 0.010356196202337742, -0.04070695489645004, -0.011320780962705612, -0.00903269648551941, -0.007574604358524084, 0.027202773839235306, 0.016540003940463066, -0.004553735256195068, -0.0008327952236868441, -0.0036620558239519596, -0.00563795818015933, -0.03765617683529854, 0.008576575666666031, 0.00023974408395588398, 0.013347155414521694, 0.016450276598334312, 0.019276732578873634, -0.01689891889691353, 
0.03634015470743179, -0.018185032531619072, 0.024406228214502335, -0.002860104665160179, 0.01196383684873581, -0.0007285790052264929, -0.004026578739285469, 0.0012935898266732693, 0.01915709301829338, -0.006310923956334591, 0.00866630394011736, -0.00249932031147182, 0.01772143319249153, 0.002149751875549555, 0.018304670229554176, -0.0142743531614542, -0.025797024369239807, -0.02531846985220909, -0.001329107559286058, -0.01676432602107525, 0.018618721514940262, -0.014551016502082348, 0.019381415098905563, 0.019650602713227272, -0.05261097475886345, 0.002637652214616537, -0.005581877660006285, 0.016614777967333794, -0.014551016502082348, 0.025034328922629356, -0.02092175930738449, -0.005742641631513834, -0.018872952088713646, 0.01017673872411251, 0.023209843784570694, 0.002506797667592764, -0.006325878668576479, -0.005940792616456747, 0.02235742099583149, -0.011612399481236935, -0.0031236831564456224, 0.02442118152976036, -0.012502209283411503, 0.008015770465135574, 0.023508939892053604, 0.00024558580480515957, -0.028578614816069603, 0.012218068353831768, -0.0020600231364369392, 0.0014814594760537148, 0.0006402521976269782, -0.04061722755432129, -0.02895248495042324, -0.005122017581015825, -0.006953980308026075, -0.008374685421586037, -0.0004883675719611347, 0.04770579934120178, -0.015538033097982407, 0.012367616407573223, -0.012225545942783356, 0.006494120229035616, 0.007081096060574055, 0.010333764366805553, -0.021878866478800774, -0.02295561134815216, -0.001149649964645505, 0.002321731997653842, 0.014678132720291615, 0.019246822223067284, 0.00937665719538927, 0.013025627471506596, -0.004508871119469404, -0.013018149882555008, -0.01831962540745735, 0.018813133239746094, -0.007585820276290178, 0.0255278367549181, 0.007903609424829483, 0.021908776834607124, 0.014379036612808704, 0.003981714602559805, 0.00380225689150393, -0.016779281198978424, 0.03382774814963341, 0.01912718452513218, -0.014977228827774525, 0.008127931505441666, 0.037835635244846344, 0.009952416643500328, -0.009832778945565224, -0.003626537974923849, -0.00906260684132576, -0.012225545942783356, -0.02472027763724327, -0.0029124466236680746, -0.00011484816059237346, 0.009728094562888145, -0.04495411738753319, -0.013721025548875332, -0.02365848794579506, 0.017302699387073517, -0.004890218377113342, -0.00017513468628749251, 0.016405411064624786, 0.03044796548783779, 0.0084120724350214, 0.012016179040074348, -0.03592142090201378, -0.026111073791980743, 0.03807491064071655, 0.0005977245164103806, -0.01102916244417429, -0.005735164508223534, 0.0009323380654677749, -0.010685201734304428, -0.0069315480068326, -0.00937665719538927, -0.0014954796060919762, -0.017870981246232986, 0.0020338522735983133, -0.00656515546143055, -0.005473455414175987, -0.0285487063229084, 0.0017459724331274629, 0.029176807031035423, 0.017437292262911797, -0.024615595117211342, 0.011634831316769123, -0.01147780567407608, -0.0025124058593064547, 0.022073280066251755, 0.0035068998113274574, 0.0011879716766998172, -0.012599416077136993, -0.010573040693998337, -0.000624830077867955, 0.016943784430623055, 0.007025015540421009, 0.02815988101065159, -0.0035592415370047092, 0.0005617395509034395, -0.004658419173210859, -0.028728162869811058, -0.021609680727124214, -0.0322425402700901, -0.013354633003473282, 0.0182598065584898, -0.017601795494556427, 0.003205934539437294, 0.009421521797776222, 0.026664402335882187, 0.014603358693420887, 0.016839100047945976, -0.019785195589065552, 0.06460472196340561, 0.01882808841764927, 0.003063863841816783, -0.0029292707331478596, 
-0.0036340155638754368, -0.006408130284398794, 0.003439603140577674, -0.02455577626824379, -0.007312895264476538, 0.0002377578930463642, -0.02425668016076088, -0.000853358069434762, 0.006699748802930117, 0.0189925916492939, 0.021370403468608856, 0.010423492640256882, 0.01562776230275631, 0.0034657740034163, 0.0050360276363790035, -0.008404595777392387, -0.0017347563989460468, -0.0029629189521074295, -0.009526205249130726, 0.011507716029882431, 0.0008290564874187112, -0.008606485091149807, -0.023105159401893616, -0.004766841419041157, 0.004923866596072912, -0.022940658032894135, 0.006628713570535183, -0.00010801335156429559, -0.006157637108117342, 0.0011160016292706132, -0.005611787084490061, 0.022312555462121964, -0.016106314957141876, 0.011260961182415485, -0.009421521797776222, 0.029236625880002975, 0.00696519622579217, -0.0035162465646862984, 0.016734417527914047, 0.004101352766156197, 0.011216097511351109, -0.005327646154910326, 0.006991367321461439, 0.011687173508107662, -0.029730135574936867, -0.007585820276290178, 0.0030077833216637373, -0.013870573602616787, 0.0349942222237587, 0.006572633050382137, -0.014984705485403538, 0.0278159212321043, 0.026544762775301933, -0.010954388417303562, 0.005159404594451189, 0.003071341197937727, 0.02694854326546192, -0.00543232960626483, 2.266586307086982e-05, -0.033438924700021744, 0.013960301876068115, -0.003992930520325899, 0.009937462396919727, -0.02832438424229622, 0.005522058345377445, -0.02938617393374443, -0.005447284318506718, -7.816217839717865e-05, -0.0067184423096477985, -0.006239888723939657, 0.006987628526985645, -0.008471892215311527, -0.007970906794071198, -0.00653150724247098, -0.00015398766845464706, -0.001799248973838985, 0.025542791932821274, -0.0018758922815322876, -0.00957854650914669, 0.03248181566596031, -0.00864387210458517, 0.026410169899463654, 0.03427639231085777, 0.009458908811211586, -0.010573040693998337, -0.013803277164697647, -0.022148054093122482, 0.020233839750289917, -0.0023385563399642706, -0.013601386919617653, -0.013691116124391556, 0.008972877636551857, -0.014334172010421753, -0.011193664744496346, 0.006041737738996744, 0.01336211059242487, 0.0038807697128504515, 0.02382299117743969, -0.00877846498042345, -0.00942899938672781, 0.005514581222087145, -0.009653320536017418, -0.017706478014588356, -0.014984705485403538, -0.044595200568437576, -0.012599416077136993, 0.007215689402073622, 0.0005967898177914321, 0.009488818235695362, -0.007608252577483654, -0.006912854500114918, -0.010169261135160923, 0.017945755273103714, -0.006086601875722408, 0.00816531851887703, 0.015538033097982407, -0.0019020631443709135, 0.016913874074816704, -0.008038203231990337, -0.015582897700369358, -0.024914691224694252, 0.0025797022972255945, 0.03188362717628479, -0.023703351616859436, 0.014304262585937977, -0.02815988101065159, 0.034336213022470474, -0.016375502571463585, -0.017571885138750076, 0.015171640552580357, 0.0033106179907917976, -0.008868194185197353, 0.009571069851517677, 0.007813881151378155, 0.022731291130185127, 0.0067819999530911446, -0.0022020938340574503, 0.024301543831825256, 0.026365306228399277, 0.02786078490316868, 0.013055536895990372, 0.02545306272804737, -0.0190225001424551, 0.02295561134815216, 0.002116103656589985, -0.022073280066251755, -0.00862891785800457, -0.012285364791750908, -0.008441982790827751, 0.017272789031267166, 0.01625586301088333, 0.0013917307369410992, -0.017407381907105446, -0.006150159984827042, 0.013758412562310696, -0.02629053220152855, -0.0004883675719611347, -0.029206717386841774, 
-7.269286925293272e-06, -0.02089185081422329, 0.0060342601500451565, -0.0015123037155717611, -0.00530521385371685, 0.01595676690340042, 0.014872544445097446, 0.01932159624993801, 0.0307470615953207, -0.0056716063991189, -0.0240622665733099, 0.015201549977064133, -0.01176194753497839, -0.012547073885798454, -0.006135205272585154, 0.004213513806462288, 0.00021976541029289365, 0.0064156074076890945, 0.01836448907852173, 0.024600639939308167, 0.020532935857772827, 0.008815851993858814, 0.01568758115172386, -0.02181904762983322, 0.010670247487723827, -0.006957719102501869, -0.03337910398840904, -0.015567942522466183, 0.018947726115584373, 0.008868194185197353, 0.02705322578549385, 0.005925837904214859, 0.02075725793838501, 0.0001379813620587811, -0.0077316295355558395, 0.0005308952531777322, 0.01929168775677681, -0.029012303799390793, -0.01733260788023472, -0.03349874168634415, -0.01758684031665325, 0.006524029653519392, -0.010924478992819786, -0.023808035999536514, -0.003777955425903201, 0.014506151899695396, -0.0006281014648266137, 0.01633063703775406, -0.009915029630064964, -0.011754469946026802, -0.036579430103302, -0.010273944586515427, 0.008456937037408352, -0.009159812703728676, -0.008255047723650932, -0.04695805907249451, -0.019949698820710182, 0.017766298726201057, -0.006146421190351248, 0.015672625973820686, -0.021774183958768845, -0.0002446511061862111, 0.023374347016215324, -0.0014393990859389305, 0.011948882602155209, -0.0001572590263094753, 0.011066549457609653, 0.005761335138231516, 0.004194820299744606, -0.024166950955986977, 0.009915029630064964, 0.007114744279533625, 0.008112977258861065, -0.010999253019690514, 0.015493168495595455, 0.00024044507881626487, 0.001053378451615572, 0.002467541489750147, -0.018678540363907814, 0.012434912845492363, -0.008359731175005436, 0.015044525265693665, -0.004935082979500294, 0.003248929511755705, -0.004710760898888111, -0.03239208832383156, 0.028877710923552513, 0.029460947960615158, -0.0011337604373693466, -0.01519407331943512, -0.02596152573823929, -0.014887499623000622, 0.005200530402362347, -0.0013076099567115307, 0.0098776426166296, 0.010333764366805553, 0.008195227943360806, -0.017437292262911797, 0.004927605390548706, 0.00677078403532505, -0.00896540004760027, -0.0003981714544352144, 0.005469716619700193, 0.015231460332870483, -0.03631024435162544, -0.005989396013319492, -0.008494324050843716, 0.01926177740097046, -0.018110258504748344, 0.02222282625734806, -0.007940996438264847, 0.009458908811211586, -0.002813371131196618, -0.007926042191684246, -0.008105499669909477, -0.03502413257956505, -0.00919719971716404, 0.0004491112194955349, 0.01646522991359234, 0.01719801500439644, 0.00826252531260252, -0.012083475477993488, 0.008053157478570938, -0.00963088870048523, 0.02762150950729847, 0.003088165307417512, -0.004542519338428974, -0.014438855461776257, -0.0004778524744324386, -0.00010848068632185459, 0.010617905296385288, -0.0006753025227226317, -0.0010776800336316228, -0.01769152469933033, 0.005540751852095127, 0.01918700337409973, 0.01090952381491661, 0.012068520300090313, -0.013459316454827785, 0.008255047723650932, 0.0219237320125103, -0.0065838489681482315, 0.030133914202451706, -0.005578138865530491, -0.0019702943973243237, 0.005510842427611351, 0.0073988852091133595, 0.00979539193212986, 0.0024058527778834105, 0.0066324518993496895, -0.011133845895528793, 0.02189382165670395, -0.00459859985858202, -0.02392767369747162, -0.009645843878388405, 0.018843043595552444, -0.008441982790827751, 0.018244851380586624, -0.010954388417303562, 
-0.00944395363330841, -0.009227109141647816, -0.01858881115913391, -0.010393583215773106, -0.00570525461807847, -0.00981034617871046, -0.029505813494324684, 0.01758684031665325, 0.009152335114777088, -0.0003427919582463801, -0.014446333050727844, 0.008000816218554974, -0.001919821952469647, 0.012532119639217854, 0.0011552580399438739, 0.023030385375022888, -0.013339677825570107, -0.004493916407227516, -0.003063863841816783, -0.007791448850184679, -0.010453402996063232, -0.0007669006590731442, -9.825768211157992e-05, 0.0062174564227461815, -0.006041737738996744, 0.010378628969192505, 0.0008799962815828621, 0.012479777447879314, -0.010827272199094296, -0.014835157431662083, 0.010849704965949059, 0.027875740081071854, 0.004220991395413876, 0.006389436777681112, -0.010386105626821518, -0.01858881115913391, 0.0091448575258255, 0.005540751852095127, -0.02442118152976036, 0.024675413966178894, -0.015433349646627903, 0.013115356676280499, 0.0033274421002715826, -0.00832982175052166, 0.004755625035613775, -0.01565767079591751, 0.017467202618718147, 0.02072734758257866, -0.0006869859644211829, 0.011470329016447067, 0.00366579438559711, 0.0029610495548695326, -0.007159608881920576, 0.009982326067984104, 0.00907756108790636, 0.00151791179087013, -0.001170212752185762, 0.003970498219132423, 0.01184419821947813, -0.012337706983089447, 0.02129562944173813, 0.004493916407227516, -0.021235810592770576, -0.009840255603194237, 0.006475426722317934, -0.007028754334896803, -0.0011814289027824998, -0.007010060828179121, -0.0038770309183746576, -0.019800150766968727, 0.00044537251233123243, -0.020532935857772827, -0.010700156912207603, 0.024615595117211342, -0.020413296297192574, 0.0017487765289843082, 0.01695873960852623, 0.008770988322794437, 0.005955747794359922, 0.00877846498042345, -0.005215485114604235, -0.01852899231016636, -0.012988240458071232, 0.01490993145853281, -0.007903609424829483, 0.02439127303659916, -0.005922099109739065, -0.013825709000229836, -0.00727176945656538, 0.008636394515633583, 0.001138433814048767, 0.010819794610142708, -0.016435321420431137, 0.005929576698690653, 0.000330407521687448, 0.02149004302918911, 0.0019964652601629496, 0.0006935286801308393, 0.01129834819585085, -0.021848957985639572, 0.0036714025773108006, 0.0042060362175107, -0.002306777285411954, -0.0032956632785499096, 0.020248794928193092, 0.005391204264014959, 0.013982734642922878, 0.01565767079591751, -0.0042546396143734455, 0.0034657740034163, 0.008636394515633583, 0.025662429630756378, -0.025647476315498352, 0.025273606181144714, -0.009870165959000587, -0.025767114013433456, -0.020144110545516014, 0.00015994621207937598, -0.0036601864267140627, -0.004456529393792152, -0.019785195589065552, -0.00686425156891346, -0.004478961694985628, -0.0021385359577834606, 7.553340401500463e-05, -0.007372714579105377, 0.016599824652075768, 0.013556522317230701, 0.007671810686588287, 0.001342192990705371, 0.01991978846490383, -0.018932772800326347, -0.005626742262393236, -0.024540821090340614, -0.016181088984012604, 0.00506593706086278, -0.009675753302872181, -0.005308952648192644, -0.0019796411506831646, -0.006688532419502735, 0.04082659259438515, 0.0031629393342882395, 0.021998506039381027, 0.0009281320380978286, 0.0025815716944634914, -0.009488818235695362, 0.003607844701036811, 0.012180681340396404, -0.018484128639101982, -0.018244851380586624, -0.005439807195216417, 0.006684794090688229, 0.01384066417813301, -0.007447488605976105, -0.005181836895644665, 0.008023248054087162, -0.006894160993397236, -0.005073414649814367, 
-0.00650533614680171, 0.0024021142162382603, 0.007443749811500311, 0.0024245462846010923, -0.0019964652601629496, 0.008494324050843716, 0.028414113447070122, 0.019276732578873634, -0.02768132835626602, -0.0036975734401494265, -0.007985861040651798, 0.006696010008454323, -0.0075185238383710384, -0.007410101592540741, 0.013668683357536793, 0.0018702842062339187, 0.012891034595668316, -0.010789885185658932, 0.006647407077252865, 0.007626946084201336, 0.016345592215657234, -0.0043518454767763615, 0.011223574168980122, -0.0030171300750225782, 0.003080687951296568, 0.0048042284324765205, -0.011380599811673164, 0.0037349604535847902, 0.0013506050454452634, -0.0045350417494773865, -0.00637074327096343, 0.0040863980539143085, 0.005350078456103802, 0.019785195589065552, -0.001704846741631627, 0.018813133239746094, 0.009713140316307545, 0.014939841814339161, -0.007918564602732658, -0.001640354166738689, 0.0068006934598088264, 0.004224729724228382, -0.01685405522584915, -0.002288083778694272, 0.003820950398221612, 0.005178098101168871, 0.005694038700312376, 0.0023722045589238405, -0.017482155933976173, 0.013668683357536793, 0.0011318911565467715, -0.0067184423096477985, 0.0011618006974458694, -0.00821018312126398, -0.00877846498042345, 0.014767860993742943, 0.01233022939413786, 0.012891034595668316, -0.0059370542876422405, 0.01852899231016636, -0.011687173508107662, 0.002682516584172845, -0.026843858882784843, 0.01226293295621872, 0.003673271741718054, 0.00683060334995389, -0.022207872942090034, -0.015672625973820686, -0.011971314437687397, -0.0067670452408492565, -0.031195705756545067, 0.0074998303316533566, 0.00023086466535460204, 0.029371220618486404, -0.008928013034164906, 0.008202705532312393, -0.010655292309820652, 0.0070848348550498486, 0.005510842427611351, -0.0037237443029880524, 0.004213513806462288, -0.024211814627051353, 0.0016926960088312626, -0.0027722453232854605, -0.010924478992819786, -0.008202705532312393, -0.006849296856671572, 0.004321936052292585, -3.256757554481737e-05, 0.0025722249411046505, 0.01440894603729248, 0.013414451852440834, 0.004748147912323475, -0.015448304824531078, 0.03382774814963341, -0.0031498540192842484, 0.00763816200196743, 0.0016478316392749548, 0.014880022034049034, 0.005391204264014959, 0.018543947488069534, 0.015463259071111679, 0.0023441642988473177, 0.005346339661628008, -0.010154306888580322, 0.01806539297103882, 0.025348380208015442, -0.00606417004019022, 0.006651145406067371, -0.012233023531734943, -0.017003603279590607, 0.03071715123951435, -0.011604921892285347, 0.021131126210093498, 0.01728774420917034, -0.005888450890779495, -0.03146488964557648, 0.011589966714382172, -0.0003794779477175325, 0.020039426162838936, -0.024436136707663536, -0.018618721514940262, -0.0006612823926843703, 0.01031133159995079, 0.00168428395409137, 0.011739514768123627, -0.011380599811673164, 0.008726123720407486, 0.01482768077403307, 0.004041533451527357, -0.0225966963917017, 0.03469512611627579, 0.02431649900972843, 0.017706478014588356, -0.017183061689138412, -0.02129562944173813, 0.016674598678946495, 0.023045340552926064, -0.01942628063261509, -0.005529535934329033, 0.008718646131455898, -0.010580518282949924, 0.02232751064002514, 0.007798926439136267, -0.017706478014588356, 0.004617293365299702, -0.00784379057586193, -0.022043369710445404, -0.005899667274206877, -0.0145958811044693, -0.012524642050266266, -0.002996567403897643, 0.0019478622125461698, 0.007380191702395678, 0.011604921892285347, -0.006939025595784187, -0.011515192687511444, -0.019904833287000656, 
0.006561416666954756, 0.00906260684132576, 0.0066324518993496895, 0.014169669710099697, -0.003647100878879428, -0.008434505201876163, -0.007443749811500311, 0.009159812703728676, -0.021235810592770576, 0.007391408085823059, 0.001636615488678217, -0.008554143831133842, 0.028773028403520584, 0.0109319556504488, 0.002314254641532898, -0.01379579957574606, 0.007163347210735083, -0.00976548157632351, 0.0002031749318121001, -0.009847733192145824, -0.004733193200081587, 0.009503773413598537, 0.005480933003127575, -0.033169738948345184, 0.012240501120686531, 0.018274761736392975, 0.006176330614835024, 0.02246210351586342, 0.004553735256195068, 0.03885256126523018, 0.01484263502061367, -0.0009028958156704903, -0.005159404594451189, -0.008112977258861065, -0.00869621429592371, -0.00048042283742688596, 0.006157637108117342, 0.015612807124853134, -0.0017226055497303605, -0.024032358080148697, 0.001799248973838985, -0.029431039467453957, 0.007940996438264847, -0.009915029630064964, 0.01855890266597271, 0.02425668016076088, -0.0036994426045566797, -0.01695873960852623, 0.0366392508149147, -0.02072734758257866, -0.03221262991428375, 0.031076066195964813, 0.01164230890572071, 0.0018852390348911285, -0.0021273198071867228, -0.01177690178155899, -0.009548637084662914, 0.013354633003473282, -0.018274761736392975, 0.018813133239746094, -0.010879614390432835, 0.017571885138750076, 0.009414044208824635, 0.007926042191684246, -0.0011356298346072435, -0.02895248495042324, -0.011851675808429718, -0.002437631832435727, 0.011148800142109394, -0.03831418603658676, -0.005929576698690653, -0.0009285993874073029, -0.006662361789494753, 0.014610836282372475, 0.022073280066251755, -0.004041533451527357, 0.01639045588672161, 0.03619060665369034, 0.00019698271353263408, 0.002706818049773574, -0.00823261495679617, 0.008479369804263115, -0.0017525152070447803, 0.0021478827111423016, 0.008427027612924576, -0.0031442458275705576, -0.02678404003381729, -0.022013459354639053, 0.0025310993660241365, 0.014722997322678566, 0.010087010450661182, 0.012838692404329777, -0.014954796060919762, -0.00161605270113796, 0.006524029653519392, 0.018349535763263702, -0.02309020608663559, 0.0005757596809417009, -0.0074848756194114685, 0.005544490646570921, 0.011186187155544758, 0.0051893144845962524, -0.00994493905454874, -0.0019329073838889599, 0.013122833333909512, 0.007522262632846832, -0.006363265682011843, 0.031704168766736984, -0.007555910851806402, -0.0017964448779821396, 0.0026862553786486387, -0.0010945041431114078, -0.003419040236622095, 0.012203114107251167, -0.009772959165275097, 0.01109645888209343, 0.008539188653230667, 0.023598669096827507, 0.001908605918288231, 0.009309360757470131, 0.009780436754226685, 0.01579226553440094, -0.011866630986332893, 0.002054415177553892, 0.02694854326546192, 0.013623819686472416, 0.014259397983551025, -8.143353625200689e-05, 0.026574673131108284, -0.011403031647205353, 0.004310720134526491, -0.007645639590919018, 0.004277071915566921, 0.012801305390894413, 0.017138196155428886, 0.020338522270321846, 0.008023248054087162, 0.020846985280513763, -0.019680511206388474, 0.0040564886294305325, -0.003243321320042014, -0.014685610309243202, -0.004306981340050697, 0.02498946525156498, 0.008225138299167156, 0.037536539137363434, -0.005551968235522509, -0.02205832488834858, -0.039032019674777985, -0.00047364644706249237, -0.02135544829070568, 0.010072055272758007, -0.002082455437630415, 0.0042546396143734455, -0.0035405480302870274, -0.0024058527778834105, 0.004527564626187086, -0.012838692404329777, 
0.016240907832980156, 0.019605737179517746, 0.019366461783647537, 0.0016983040841296315, -0.01766161434352398, 0.029595540836453438, 0.01484263502061367, 0.024495955556631088, -0.019471144303679466, 0.028533751145005226, 0.011874108575284481, 0.005925837904214859, 0.021235810592770576, 0.01526136975735426, -0.003011522116139531, -0.012180681340396404, 0.000904297805391252, 0.0013954694150015712, -0.002525491174310446, -0.0039032017812132835, 0.012659234926104546, -0.02575215883553028, -0.0024058527778834105, -0.018917817622423172, -0.01736251823604107, 0.0026339134201407433, -0.017153151333332062, 0.01116375532001257, -0.018439263105392456, 0.007477398030459881, 0.015941813588142395, 0.005473455414175987, -0.01839439943432808, -0.0011917103547602892, 0.02095166966319084, 0.004407925996929407, 0.027636462822556496, -0.012838692404329777, -0.01416219212114811, -0.00832982175052166, 0.02895248495042324, 0.004063965752720833, -0.002074978081509471, 0.0037667392753064632, 0.03128543496131897, 0.017347563058137894, -0.0001502489612903446, 0.009219631552696228, 0.01882808841764927, -0.009451431222259998, -0.010184216313064098, -0.013055536895990372, 0.007813881151378155, -0.015403440222144127, 0.010670247487723827, 0.01568758115172386, -0.007746584247797728, -0.002708687447011471, -0.002671300433576107, 0.011821766383945942, 0.021475087851285934, 0.0006042672321200371, 0.006677316501736641, -0.01054313126951456, -0.008075590245425701, 0.008972877636551857, 0.011223574168980122, -0.0021590986289083958, 0.013638773933053017, 0.022432195022702217, -0.007615730166435242, -0.009017742238938808, 0.005495887715369463, 0.020667528733611107, 0.013855618424713612, 0.005821154452860355, -0.012502209283411503, -0.005013595335185528, 0.023598669096827507, 0.022043369710445404, -0.038403917104005814, 0.01872340403497219, -0.01104411669075489, 0.013773367740213871, -0.020084291696548462, -0.004475222900509834, 0.02143022231757641, -0.01736251823604107, 0.010954388417303562, -0.001519781188108027, 0.0005023877019993961, 0.0014263136545196176, -0.014438855461776257, -0.003654578235000372, -0.009271973744034767, -0.0013263034634292126, 0.0284589771181345, -0.0023105160798877478, 0.009518727660179138, -0.0011262830812484026, -0.002860104665160179, -0.015104344114661217, -0.01169465109705925, 0.017183061689138412, 0.021744273602962494, -0.004770580213516951, 0.038164641708135605, -0.009040174074470997, -0.0091299032792449, 0.004161172080785036, 0.0017983142752200365, 0.012255455367267132, -0.003600367112085223, -0.006524029653519392, -0.0026339134201407433, -0.002117973053827882, -0.006041737738996744, 0.000202473922399804, -0.0036844878923147917, 0.00871116854250431, -0.0029180545825511217, -0.00686425156891346, -0.03284073248505592, 0.009735572151839733, 0.006090340670198202, 0.006961457431316376, -0.000949629582464695, 0.014506151899695396, -0.0031423766631633043, -0.00131789140868932, -0.011791856959462166, 0.022611651569604874, -0.0063856979832053185, 0.018573855981230736, 0.002675039228051901, 0.01639045588672161, -0.009294405579566956, -0.013399497605860233, 0.03451567143201828, -0.0014795901952311397, 0.010984297841787338, -0.009107470512390137, -0.00036101811565458775, 0.01249473262578249, -0.020039426162838936, 0.010632860474288464, 0.014199579134583473, -0.0025030591059476137, -0.024481002241373062, 0.015941813588142395, 0.0023292095866054296, 0.00786622241139412, -0.021564817056059837, 0.006019305437803268, -0.01636054739356041, 0.014625790528953075, -0.0007781167514622211, -0.003377914661541581, 
-0.012285364791750908, 0.011933927424252033, 0.007122221868485212, -0.019351506605744362, 0.01177690178155899, -0.007241860032081604, 0.022835973650217056, 0.00503228884190321, -0.014767860993742943, 0.009040174074470997, -0.001023468910716474, -0.006961457431316376, -0.012517164461314678, -0.005379987880587578, 0.0029554415959864855, 0.002471280051395297, 0.0067819999530911446, -0.01698864810168743, 0.0036994426045566797, -0.0006028652423992753, 0.020368432626128197, 0.0321827232837677, 0.0015571682015433908, -0.017153151333332062, 0.02075725793838501, -0.008135409094393253, -0.01912718452513218, 0.009541160427033901, 0.008688736706972122, -0.006221195217221975, 0.005884712096303701, -0.02172931842505932, -0.012120862491428852, -0.00487152487039566, -0.013287336565554142, 0.013758412562310696, 0.011081503704190254, 0.005002379417419434, -0.005286520346999168, -0.010602950118482113, -0.007851268164813519, -0.007888655178248882, -0.005439807195216417, 0.0168241448700428, 0.00196655560284853, -0.023269662633538246, -0.006785738747566938, 0.003819081000983715, -0.007507307920604944, -7.63074331189273e-06, 0.025004418566823006, 0.004464006517082453, -0.00869621429592371, 0.009279451332986355, 0.01991978846490383, 0.0076643330976367, 0.009533682838082314, -0.006673577707260847, 0.003630276769399643, 0.0013982733944430947, 0.003970498219132423, -0.006909115705639124, -0.021370403468608856, 0.005443545989692211, 0.002506797667592764, -0.009040174074470997, -0.007122221868485212, -0.023239754140377045, -0.007223166525363922, -0.046359866857528687, 0.0030077833216637373, -0.011702127754688263, -0.02102644369006157, -0.018902862444519997, 0.01926177740097046, 0.00677078403532505, -0.012173203751444817, 0.012285364791750908, 0.02955067716538906, 0.025767114013433456, -0.005376249086111784, 0.012464822269976139, 0.008509279228746891, 0.013788321986794472, -0.01668955199420452, 0.0016244647558778524, -0.005589355248957872, -0.008546666242182255, 0.012337706983089447, -0.003744307206943631, 0.007077357266098261, 0.007275508251041174, -0.006348310969769955, -0.009167290292680264, -0.005507103633135557, 0.009548637084662914, 0.0013468663673847914, -0.02422676980495453, 0.023314528167247772, 0.019366461783647537, -0.002142274519428611, -0.005114540457725525, 0.00791108701378107, 0.009391612373292446, 0.0073652369901537895, 0.012150771915912628, -0.006322139874100685, -0.00426585553213954, -0.025841888040304184, -0.011193664744496346, -0.021340494975447655, -0.00579872215166688, -0.007473659235984087, 0.0069352868013083935, 0.001602967269718647, 0.008823329582810402, -0.004142478574067354, -0.006157637108117342, 0.00786622241139412, 0.012479777447879314, -0.012778873555362225, 0.027561688795685768, -0.009593501687049866, -0.004609815776348114, -0.002286214381456375, 0.02691863290965557, -0.015164162963628769, -0.002458194736391306, -0.021445177495479584, 0.011335735209286213, 0.003456427250057459, -0.003088165307417512, -0.026709266006946564, 0.02018897421658039, -0.00383216654881835, -0.00113282585516572, -0.003426517592743039, 0.02436136268079281, 0.017841072753071785, 0.0017039120430126786, -0.019037455320358276, -0.009586024098098278, -0.004348107147961855, -0.011373122222721577, 0.0056716063991189, 0.013048059307038784, -0.005159404594451189, -0.009571069851517677, -0.014700564555823803, -0.006856773979961872, 0.0009388807811774313, -0.007615730166435242, 0.02143022231757641, 0.014648223295807838, 0.025737203657627106, 0.004277071915566921, 0.0033311808947473764, -0.026380261406302452, 
-0.01393039245158434, -0.03014886938035488, -0.003925634082406759, -0.004942560102790594, 0.0017095201183110476, -0.026170892640948296, -0.01799061894416809, 0.0021366665605455637, 0.022237781435251236, -0.008038203231990337, -0.00309938145801425, 0.0029124466236680746, 0.011926449835300446, -0.009219631552696228, -0.029939502477645874, -0.005551968235522509, -0.010505744256079197, -0.0023273401893675327, -0.015941813588142395, 0.023882810026407242, 0.0034657740034163, -0.002867582254111767, -0.005095846951007843, 0.015037047676742077, 0.016151180490851402, -0.0053725107572972775, 0.023344436660408974, 0.00907756108790636, 0.023314528167247772, -0.001423509675078094, -0.01221059076488018, -0.007507307920604944, -0.00410509156063199, -0.01066276989877224, -0.017153151333332062, 0.023179933428764343, 0.027980424463748932, 0.009518727660179138, -0.009832778945565224, -0.0014393990859389305, -0.018947726115584373, 0.026903677731752396, -0.0022338726557791233, -0.022447148337960243, -0.032122902572155, 0.0027703759260475636, -0.0016356807900592685, 0.007099789567291737, 0.008845762349665165, -0.004673373885452747, -0.005854802671819925, 0.0036601864267140627, -0.010999253019690514, -0.005009856540709734, 0.01256202906370163, 0.00040050814277492464, -0.0023460336960852146, 0.020443206652998924, -0.002660084282979369, 0.01466317754238844, -0.0042396849021315575, 0.010894568637013435, -0.017317654564976692, 0.0054846713319420815, 0.008456937037408352, -0.025976480916142464, 0.013571477495133877, -0.005847325548529625, -0.00533138494938612, -0.00827747955918312, 0.015986677259206772, 0.004912650678306818, 0.01695873960852623, 0.0063370950520038605, -0.014453810639679432, 0.04169397056102753, -0.0002556335530243814, -0.010789885185658932, -0.004217252600938082, 0.0034975530579686165, 0.014805248007178307, -0.005522058345377445, -0.004378016572445631, -0.027038272470235825, 0.0006122119957581162, -0.02222282625734806, 0.01244239043444395, -0.008150364272296429, 0.016270818188786507, 0.022611651569604874, 0.02146013267338276, -0.034336213022470474, -0.014401468448340893, -0.004235946107655764, 0.014102373272180557, 0.02786078490316868, 0.0004991163150407374, -0.007279247045516968, 0.0024077221751213074, 0.003280708333477378, 0.006789477542042732, 0.003241452155634761, 0.003628407372161746, -0.0051706209778785706, -0.010206648148596287, -0.009586024098098278, -0.02108626253902912, -0.001602967269718647, -0.016016587615013123, 0.0050696758553385735, -0.008860716596245766, 0.03305010125041008, 0.013392020016908646, 0.01962069235742092, -0.003194718388840556, -0.0032788391690701246, 0.018902862444519997, 0.003471381962299347, 4.6762947022216395e-05, 0.007092311978340149, -0.0017908368026837707, 0.0004002744681201875, -0.03547277674078941, -0.006995106115937233, 0.020832031965255737, -0.02458568476140499, -0.0024021142162382603, -0.0012917205458506942, 0.01111141312867403, 0.03137516230344772, 0.01090952381491661, -0.014939841814339161, 0.0004759831354022026, 0.0010776800336316228, -0.012419958598911762, 0.011724560521543026, -0.010139351710677147, 0.01806539297103882, 0.01565767079591751, -0.017571885138750076, 0.00974304974079132, -0.0005365033284761012, -0.008150364272296429, 0.0031535925809293985, -0.010423492640256882, 0.0004051815194543451, 0.00883080717176199, -0.00976548157632351, -0.012173203751444817, 0.00862891785800457, 0.012726531364023685, 0.019680511206388474, 0.007013799622654915, -0.0072867246344685555, 0.012614370323717594, 0.023568758741021156, -0.020802121609449387, 
0.012517164461314678, -0.02012915536761284, 0.0021834003273397684, -0.024735232815146446, -0.0043181972578167915, -0.009705662727355957, 0.006834341678768396, -0.007357759866863489, 0.0027498132549226284, 0.016973692923784256, 0.02135544829070568, -0.008561620488762856, -0.01006457768380642, 0.0006855839164927602, 0.006942763924598694, 0.016016587615013123, -0.002828325843438506, 0.02325470745563507, -0.013960301876068115, -0.004355584271252155, 0.005925837904214859, -0.003211542498320341, -0.012345184572041035, 0.01685405522584915, -0.02246210351586342, -0.011821766383945942, -0.01918700337409973, -0.01985996961593628, -0.00607164716348052, 0.017078377306461334, 0.0030862961430102587, 0.0022170485462993383, 0.008427027612924576, -0.008815851993858814, -0.0009589763358235359, -0.016510095447301865, -0.04357827454805374, 0.007851268164813519, -0.01942628063261509, 0.015111821703612804, -0.0050846305675804615, -0.012920944020152092, -0.013534090481698513, 0.01159744430333376, 0.02485487051308155, 0.008292434737086296, 0.04265107959508896, -0.0018487867200747132, 0.010154306888580322, 0.018080348148941994, -0.0036583170294761658, 0.016599824652075768, 0.009548637084662914, -0.007873700000345707, -0.010640337131917477, 0.02129562944173813, -0.0299245472997427, 0.016151180490851402, -0.007783971261233091, -0.011687173508107662, -0.014087418094277382, -0.0034433419350534678, 0.0035741962492465973, 0.008673781529068947, -0.023897765204310417, 0.01264427974820137, 0.005447284318506718, -0.012277888134121895, -0.024899736046791077, 0.018843043595552444, 0.00487152487039566, -0.0037592619191855192, -0.0005444480339065194, -0.004793012049049139, -0.042501531541347504, 0.021445177495479584, -0.020682483911514282, -0.023703351616859436, 0.004882740788161755, 0.014154714532196522, -0.016569914296269417, 0.011208619922399521, -0.013137788511812687, -0.005959486123174429, 0.017272789031267166, 0.015538033097982407, 0.00459859985858202, -0.029146898537874222, 0.005903405603021383, 0.00786622241139412, 0.01845421828329563, 0.01769152469933033, -0.024615595117211342, 0.000111868888780009, 0.01985996961593628, 0.0009842125000432134, -0.008105499669909477, -0.0033442662097513676, 0.013287336565554142, -0.012606893666088581, -0.02762150950729847, 0.011918972246348858, 0.020338522270321846, -0.01959078386425972, -0.0010580518282949924, 0.010812317952513695, 0.00859900750219822, 0.009862688370049, -0.005847325548529625, -0.02035347744822502, 0.012053566053509712, -0.0037630004808306694, 0.003751784563064575, 0.0013048059772700071, 0.007724152412265539, 0.014610836282372475, 0.0029367480892688036, 0.015642717480659485, 0.00933179259300232, 0.01317517552524805, -0.00523791741579771, -0.0248847808688879, -0.005959486123174429, -0.00570525461807847, -0.006516552530229092, -0.009825301356613636, 0.03322955593466759, -0.015807218849658966, 0.012464822269976139, 0.004602338653057814, -0.006030521355569363, 0.023673443123698235, -0.027277547866106033, 0.0023665966000407934, -0.023538848385214806, -0.012061042711138725, 0.005836109165102243, -0.00720447301864624, -0.010991775430738926, 0.0001282841112697497, -0.0004921062500216067, 0.0042060362175107, 0.01603154093027115, -0.014446333050727844, -0.003966759890317917, 0.03631024435162544, 0.006939025595784187, -0.03035823628306389, 0.007761539425700903, 0.00717082479968667, 0.015149208717048168, 0.008808375336229801, 0.000447475555120036, 0.010528176091611385, 0.01502957008779049, -0.016584869474172592, -0.005780028644949198, -0.010774930939078331, 0.0044714841060340405, 
-0.015582897700369358, 0.014356604777276516, 0.006991367321461439, -0.023179933428764343, -0.009772959165275097, 0.01719801500439644, -0.013990212231874466, 0.0019114098977297544, -0.0031909795943647623, 0.00426585553213954, -0.01514173112809658, 0.014222010970115662, -0.007851268164813519, 0.01809530332684517, 0.009099993854761124, -0.030896609649062157, -0.011918972246348858, 0.011612399481236935, -0.01722792536020279, 0.0034508192911744118, 0.01447624247521162, -0.015283801592886448, -0.024241724982857704, -0.003944327589124441, -0.006550200749188662, 0.011059071868658066, 0.0211909469217062, 0.017272789031267166, 0.005544490646570921, 0.021250765770673752, -0.007948474027216434, -0.007473659235984087, 0.01550812367349863, -0.007193257100880146, 0.021131126210093498, 0.0008351319120265543, -0.01815512217581272, 0.031734079122543335, 0.010887091979384422, -0.016435321420431137, -0.012868601828813553, -0.003787302179262042, 0.01728774420917034, 0.02808510698378086, 0.015343621373176575, 0.020906805992126465, -0.018050439655780792, -0.013780844397842884, 0.030896609649062157, -0.002054415177553892, -0.016734417527914047, -0.0025740943383425474, -0.007626946084201336, -0.00994493905454874, 0.0009879511781036854, -0.0013954694150015712, -0.016001632437109947, -0.0011618006974458694, 0.0050995852798223495, 0.024271633476018906, 0.026514854282140732, 0.02582693286240101, -0.012913466431200504, -0.009212154895067215, 0.007836312986910343, 0.019770240411162376, -0.0038658147677779198, 0.008972877636551857, 0.013848140835762024, 0.021041398867964745, 0.013115356676280499, -0.0027442050632089376, 0.007361498661339283, -0.007769016548991203, 0.0027292503509670496, 0.008928013034164906, 0.006755829323083162, -0.014005166478455067, 0.00719699589535594, -0.008195227943360806, -0.005350078456103802, -0.00433315197005868, 0.004920127801597118, -6.74134207656607e-05, -0.0017571885837242007, 0.02029365859925747, -0.008367208763957024, 0.007806403562426567, 0.007873700000345707, -0.0007524131797254086, 0.018917817622423172, 0.015552988275885582, 0.006550200749188662, 0.0015225851675495505, -0.01962069235742092, -0.0017001733649522066, -0.010206648148596287, -0.013279858976602554, 0.01374345738440752, -0.009040174074470997, 0.006299708038568497, -0.010236557573080063, 0.013421929441392422, 0.018499083817005157, 0.005006118211895227, 0.021250765770673752, 0.012584460899233818, 0.013459316454827785, -0.029610496014356613, -0.02021888457238674, 0.001238444005139172, 0.0218639113008976, -0.0023946366272866726, -0.005952008999884129, -0.015971722081303596, 0.01733260788023472, 0.02211814373731613, 0.010887091979384422, 0.008688736706972122, 0.0008510213810950518, -0.014857590198516846, -0.02069743722677231, 0.015291279181838036, -0.007955951616168022, -0.020742302760481834, -0.004662157502025366, 0.017138196155428886, 0.017811162397265434, -0.012846169993281364, -0.014207056723535061, -0.008053157478570938, -0.010154306888580322, 0.02868329919874668, -0.0025385767221450806, 0.00030914368107914925, -0.010184216313064098, -0.0044714841060340405, 0.003918156493455172, 0.01502957008779049, 0.008441982790827751, 0.013264903798699379, -0.0037162669468671083, -0.0035125077702105045, -0.010827272199094296, -0.0058660185895860195, 0.013392020016908646, -0.002074978081509471, -0.002071239287033677, 0.002841411391273141, -0.016839100047945976, -0.005690299905836582, 0.01923186704516411, -0.00951125007122755, 0.007073618471622467, 0.009421521797776222, -0.0011244136840105057, 0.0014851981541141868, 0.012719053775072098, 
0.010012236423790455, 0.0053725107572972775, 0.01189654041081667, -0.01550812367349863, -0.01612127013504505, 0.02786078490316868, 0.0013590171001851559, -0.005208007991313934, -0.01905241049826145, -0.001162735396064818, 0.009690707549452782, -0.01233022939413786, 0.000513603794388473, -0.011283393949270248, 0.004673373885452747, -0.0006224933895282447, -0.020936714485287666, 0.029834818094968796, -0.011881585232913494, 0.03822445869445801, 0.004273333121091127, -0.007955951616168022, 0.01329481415450573, 0.01127591636031866, -0.00859900750219822, 0.022372374311089516, 0.026933588087558746, 0.025109102949500084, 0.0009211219730786979, 0.007174563594162464, -0.0015552988043054938, 0.005566922947764397, 0.0022151791490614414, 0.008045680820941925, -0.006277275737375021, -0.018514037132263184, -0.00417238799855113, 0.024600639939308167, 0.0001631007471587509, -0.005854802671819925, -0.0017758820904418826, -0.011754469946026802, -0.0211909469217062, -0.0026021345984190702, 0.011059071868658066, -0.006527768447995186, 0.010184216313064098, 0.004497654736042023, 0.019037455320358276, -0.011746992357075214, 0.0012870471691712737, 0.03068724274635315, 0.03601114824414253, 0.022237781435251236, 0.001702977460809052, -0.014423901215195656, 0.0008047549636103213, -0.022043369710445404, -0.0055856164544820786, 0.011806811206042767, -0.0002715230220928788, 0.005581877660006285, -0.014565971679985523, 0.027142954990267754, -0.025871796533465385, -0.0031479846220463514, 0.0010982428211718798, -0.003624668810516596, -0.0006972673581913114, -0.008120453916490078, 0.012113384902477264, 0.01646522991359234, -0.015837129205465317, 0.03164434805512428, 0.004478961694985628, 0.008905581198632717, 0.005963224917650223, -0.0032956632785499096, 0.013668683357536793, -0.00933179259300232, 0.01017673872411251, -0.007836312986910343, -0.023912718519568443, -0.02968527004122734, -0.013526612892746925, 0.006841819267719984, -0.0038620762061327696, -0.0056641292758286, -0.020248794928193092, 0.004037795122712851, -0.00832982175052166, -1.104084549297113e-05, 0.007335327565670013, 0.01374345738440752, 0.011014207266271114, 0.016913874074816704, 0.005944531410932541, -0.013272381387650967, -0.0050995852798223495, -0.0058660185895860195, 0.01728774420917034, -0.0015281932428479195, 0.011836721561849117, -0.02216300740838051, 0.004306981340050697, -0.0029012304730713367, -0.014379036612808704, -0.003482598112896085, 0.0024338930379599333, 0.012487255036830902, -0.0018431786447763443, 0.006546461954712868, 0.014281830750405788, -0.0017609272617846727, 0.00903269648551941, -0.00040868655196391046, -0.017646659165620804, 0.010034668259322643, -0.024795051664114, -0.015448304824531078, 0.007028754334896803, -0.024869825690984726, 0.013833186589181423, -0.03248181566596031, -0.010019713081419468, -0.004153694491833448, -0.0031442458275705576, 0.008172796107828617, 0.004011624027043581, -0.024002447724342346, 0.019575828686356544, 0.003611583262681961, -0.014917409047484398, 0.001693630707450211, -0.023613622412085533, 0.004475222900509834, 0.00809802208095789, -0.01959078386425972, -0.006434300914406776, -0.027337366715073586, 0.007189518306404352, 0.01719801500439644, -0.019605737179517746, 0.009533682838082314, -7.757800631225109e-05, 0.02249201387166977, -0.015552988275885582, -0.01233022939413786, 0.01047583483159542, -0.020876895636320114, -0.03628033399581909, -0.003647100878879428, 0.01779620721936226, 0.00040401317528449, -0.018349535763263702, 0.0033274421002715826, 0.04495411738753319, 0.011537625454366207, 
-0.003914417698979378, 0.005551968235522509, -0.02211814373731613, -0.003033954184502363, 0.012838692404329777, 0.0067670452408492565, -0.02306029573082924, -0.014169669710099697, -0.026140984147787094, -0.030462920665740967, -0.01189654041081667, 0.006067908369004726, 0.017302699387073517, 0.003409693483263254, -0.005473455414175987, 0.003977975808084011, 0.01969546638429165, -0.014468764886260033, -0.0050846305675804615, -0.039869487285614014, 0.005955747794359922, -0.032601457089185715, 0.012569506652653217, -0.018214941024780273, 0.0007673680083826184, -0.015597852878272533, 0.012225545942783356, 0.014207056723535061, 0.0029087078291922808, 0.022402284666895866, 0.006359526887536049, -0.0014319217298179865, 0.013601386919617653, 0.013803277164697647, -0.02246210351586342, 0.00610529538244009, 0.03242199867963791, -0.0025516620371490717, 0.008501801639795303, 0.015493168495595455, -0.0002070304617518559, -0.005148188676685095, 0.0016431582625955343, 0.022432195022702217, -0.00796342920511961, 0.012621847912669182, 0.0048677860759198666, -0.00826252531260252, 0.024645503610372543, -0.006426823791116476, 0.007436272222548723, -0.003392869373783469, 0.008785942569375038, -0.019411325454711914, -0.019979607313871384, 0.0084120724350214, -0.004879002459347248, 0.004161172080785036, 0.01888790726661682, 0.016210999339818954, -0.005929576698690653, 0.03278091177344322, 0.011380599811673164, -0.013833186589181423, -0.02042825147509575], index=0, object='embedding')], model='text-embedding-3-large', object='list', usage=Usage(prompt_tokens=5, total_tokens=5), meta={'usage': {'credits_used': 2}}) ``` {% endcode %}
You can find a more advanced example of using embedding vectors in our article [Find Relevant Answers: Semantic Search with Text Embeddings](https://docs.aimlapi.com/use-cases/find-relevant-answers-semantic-search-with-text-embeddings) in the Use Cases section.

---

# Source: https://docs.aimlapi.com/api-references/embedding-models/openai/text-embedding-3-small.md

# text-embedding-3-small

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `text-embedding-3-small`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}

## Model Overview

An efficient and compact embedding model designed to enhance performance over its predecessor, text-embedding-ada-002. It transforms text into numerical representations that can be easily processed by machine learning models.

## Setup your API Key

If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).

## API Schema

{% openapi src="" path="/v1/embeddings" method="post" %}
[text-embedding-3-small.json](https://3927338786-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FROMd1X5PuqtikJ48n2N9%2Fuploads%2Fgit-blob-71a7504466ddb9c4ed6fbd1ac54a88168a92ecaf%2Ftext-embedding-3-small.json?alt=media\&token=3978f8cf-2c34-429b-b100-64f93a1e9f98)
{% endopenapi %}

## Code Example

{% tabs %}
{% tab title="Python" %}
```python
import openai

# Initialize the API client
client = openai.OpenAI(
    # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
    api_key="<YOUR_AIMLAPI_KEY>",
    base_url="https://api.aimlapi.com/v1",
)

# Define the text for which to generate an embedding
text = "Laura is a DJ."

# Request the embedding
response = client.embeddings.create(
    input=text,
    model="text-embedding-3-small"
)

# Print the embedding
print(response)
```
{% endtab %}

{% tab title="JS" %}
```javascript
import OpenAI from "openai";
import util from "util";

// Initialize the API client
const client = new OpenAI({
  // Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
  apiKey: "<YOUR_AIMLAPI_KEY>",
  baseURL: "https://api.aimlapi.com/v1",
});

// Define the text for which to generate an embedding
const text = "Laura is a DJ.";

const response = await client.embeddings.create({
  input: text,
  model: "text-embedding-3-small",
});

// Convert embedding to a regular array (not TypedArray)
const pythonLikeResponse = {
  ...response,
  data: response.data.map(item => ({
    ...item,
    embedding: Array.from(item.embedding),
  })),
};

// Python-like print
console.log(
  util.inspect(pythonLikeResponse, {
    depth: null,
    maxArrayLength: null,
    compact: true,
  })
);
```
{% endtab %}
{% endtabs %}

This example shows how to set up an API client, send text to the embedding API, and print the response with the embedding vector. Note how large a vector the model returns for just a single short input phrase.
Response {% code overflow="wrap" %} ```json CreateEmbeddingResponse(data=[Embedding(embedding=[-0.011468903161585331, -0.0633327066898346, -0.062064021825790405, -0.06338345259428024, 0.016987677663564682, 0.010542763397097588, 0.0067938026040792465, 0.03927845507860184, -0.005239664576947689, 0.015389136970043182, 7.775644917273894e-05, 0.01904294639825821, -0.005150856915861368, 0.0066605908796191216, 0.011088297702372074, 0.04001429304480553, -0.04511440172791481, -0.031108131632208824, 0.01598541811108589, -0.018015312030911446, 0.025754284113645554, 0.0005383977550081909, 0.005233321338891983, -0.024904265999794006, 0.04752490296959877, -0.022912433370947838, -0.007155377417802811, 0.047474153339862823, -0.00726955896243453, 0.025563981384038925, 0.0336962454020977, -0.04458155483007431, 0.03202158212661743, 0.02605876885354519, -0.003948778845369816, 0.0005411729798652232, -0.021732555702328682, 0.02732745185494423, 0.026566242799162865, 0.00858264695852995, -0.029154358431696892, -0.036258988082408905, 0.002943346742540598, 0.0026341050397604704, 0.03235144168138504, 0.022354211658239365, -0.04359197989106178, 0.03362012654542923, 0.05866394564509392, 0.010771126486361027, -0.038517244160175323, 0.023699017241597176, 0.020413124933838844, -0.020501932129263878, -0.007371054030954838, 0.0010284269228577614, -0.019816843792796135, 0.054502662271261215, 0.03694407641887665, -0.03022005222737789, 0.0218594241887331, 0.0032145280856639147, 0.07094480842351913, -0.01245847623795271, -0.014577179215848446, 0.0036252643913030624, -0.024282611906528473, -0.02326766401529312, -0.004430878907442093, 0.02010864019393921, 0.014234634116292, 0.011855851858854294, -0.017913818359375, 0.006397339049726725, 0.010777470655739307, 0.02065417543053627, 0.006019905675202608, -0.04524127021431923, 0.0005843875696882606, 0.013397302478551865, 0.028697632253170013, 0.0004341278108768165, -0.040471017360687256, -0.012084214948117733, -0.0008381243678741157, -0.02579234540462494, -0.027302078902721405, -0.0037013855762779713, 0.0018824097933247685, 0.0076755378395318985, -0.02712446264922619, -0.016086913645267487, 0.00975617952644825, 0.004193000495433807, 0.004763908218592405, -0.004294495098292828, 0.05754750594496727, 0.0016413598787039518, -0.000609364768024534, -0.004408676642924547, 0.0020124500151723623, -0.012788334861397743, -0.015820488333702087, -0.0642969012260437, 0.01990565098822117, -0.013295807875692844, 0.012185709550976753, -0.03420371934771538, -0.039024718105793, 0.0066161868162453175, -0.05856245383620262, -0.008493838831782341, -0.0016080569475889206, -0.01979147084057331, -0.0042374045588076115, -0.02125045657157898, 0.07510609179735184, -0.030042435973882675, -0.0631297156214714, -0.05465490370988846, 0.009565876796841621, 0.012008094228804111, -0.015427197329699993, -0.030727526172995567, -0.02184673771262169, -0.005877178627997637, 0.012617061845958233, -0.011583084240555763, -0.021301204338669777, -0.012858112342655659, 0.07404039800167084, -0.022557200863957405, -0.0037235873751342297, -0.022176595404744148, 0.025982648134231567, -0.026718484237790108, 0.004709989298135042, -0.04960554465651512, -0.014424936845898628, -0.01851009950041771, 0.07236573100090027, 0.020501932129263878, 0.017825009301304817, 0.002259843284264207, 0.0015033904928714037, -0.019499672576785088, -0.021288517862558365, 0.014868976548314095, 0.023940065875649452, 0.002930660033598542, 0.014729420654475689, 0.014348816126585007, -0.06480437517166138, 0.032655924558639526, 0.007688224781304598, 
-0.0184466652572155, -0.008214728906750679, 0.013562231324613094, 0.008538243360817432, 0.013384616002440453, -0.007624790538102388, 0.012280860915780067, 0.03455895185470581, -0.029154358431696892, -0.036664966493844986, -0.0038821729831397533, 0.03841575235128403, 0.03729930892586708, -0.021478820592164993, 0.006023077294230461, -0.008538243360817432, 0.032376814633607864, -0.010225593112409115, 0.002296318067237735, -0.0017936020158231258, -0.0004293702368158847, 0.053233981132507324, 0.035523150116205215, -0.003044841578230262, -0.033112652599811554, -0.006184834521263838, 0.04389646649360657, 0.00981961376965046, -0.0069143278524279594, -0.0758165568113327, 0.016264528036117554, 0.03544703125953674, -0.026413999497890472, -0.015427197329699993, 0.012902515940368176, 0.011665549129247665, 0.02532293274998665, 0.022417645901441574, 0.046484582126140594, 0.007758002728223801, 0.012788334861397743, 0.0195123590528965, -0.015363763086497784, -0.022620635107159615, 0.05232052877545357, 0.02727670595049858, 0.030955888330936432, 0.02674385905265808, -0.02165643498301506, -0.0042374045588076115, 0.022874372079968452, -0.001045078388415277, -0.020704923197627068, 0.03674108907580376, -0.016010791063308716, 0.05287874862551689, -0.010238279588520527, 0.002470762003213167, -0.023737076669931412, -0.004262777976691723, 0.03207233175635338, 0.03549777716398239, -0.010434925556182861, 0.023039301857352257, -0.011786074377596378, -0.007897557690739632, 0.039862051606178284, 0.005046190693974495, 0.01390477642416954, 0.028190158307552338, -0.0494786761701107, 0.0018142181215807796, -0.06480437517166138, 0.02813941054046154, 0.03808589279651642, 0.026413999497890472, 0.06028786301612854, 0.0208444781601429, -0.027505068108439445, -0.000441264157416299, 0.019347431138157845, -0.07292395830154419, -0.00490980688482523, -0.006194349378347397, 0.01089165173470974, -0.03463507071137428, -0.005023988429456949, 0.007060226518660784, 0.004269121680408716, -0.01625184156000614, -0.011303974315524101, -0.0033873862121254206, -0.020286256447434425, 0.023889319971203804, 0.032985784113407135, 0.056938536465168, 0.0026531354524195194, -0.016480205580592155, 0.004951039329171181, -0.009768866933882236, 0.06181028485298157, 0.04102924093604088, 0.003349325619637966, -0.028773752972483635, -0.014260007999837399, 0.01611228659749031, -0.02727670595049858, -0.02239227294921875, -0.028824500739574432, -0.01223011314868927, -0.0255386084318161, 0.007827780209481716, 0.007130003999918699, 0.015566752292215824, 0.020806416869163513, -0.018827270716428757, -0.022671382874250412, -0.0015882337465882301, 0.020045205950737, 0.022836310788989067, 0.002267772564664483, 0.013892089948058128, -0.018941450864076614, 0.004440393764525652, 0.03628436103463173, -0.022772876545786858, 0.013397302478551865, -0.016810063272714615, -0.011120014823973179, -0.01957579329609871, -0.012788334861397743, 0.008144951425492764, 0.02380051091313362, 0.005268210079520941, -0.02933197282254696, -0.010644258931279182, -0.023749763146042824, 0.01885264366865158, 0.013612979091703892, -0.009477069601416588, -0.022887058556079865, 0.04359197989106178, -0.028570763766765594, -0.026185637339949608, 0.03544703125953674, 0.0601356215775013, 0.020095953717827797, 0.013980897143483162, -0.01677200198173523, -0.02046387270092964, 0.05932366102933884, 0.01785038411617279, 0.006578126456588507, 0.034178346395492554, -0.038517244160175323, -0.028240906074643135, 0.05115333944559097, 0.013879402540624142, -0.02674385905265808, -0.05866394564509392, 
-0.02586846612393856, 0.01783769577741623, 0.036994826048612595, -0.000882528314832598, -0.04069938138127327, -0.06414466351270676, -0.024713963270187378, -0.02232883870601654, -0.072720967233181, -0.002510408405214548, 0.001971217803657055, 0.03341713547706604, 0.0034413053654134274, -0.017127234488725662, -0.0013820725725963712, 0.05901917815208435, 0.012280860915780067, -0.0653625950217247, 0.029103610664606094, -0.010143128223717213, -0.044023334980010986, -0.0036823551636189222, 0.004947867710143328, -0.029433468356728554, 0.007935618050396442, -0.0020552680362015963, -0.017469778656959534, 0.0013170525198802352, 0.022278090938925743, -0.04102924093604088, -0.026921473443508148, 0.037197813391685486, -0.06571783125400543, -0.03714706748723984, 0.020045205950737, -0.036055997014045715, -0.0036791835445910692, 0.03001706302165985, 0.02258257381618023, -0.029585709795355797, -0.042602408677339554, 0.0505443699657917, -0.04417557641863823, 0.006819176487624645, 0.0021646919194608927, -0.02499307505786419, 0.07114779949188232, 0.01105658058077097, 0.010041633620858192, -0.029433468356728554, 0.018294423818588257, 0.0009023514576256275, -0.01564287394285202, 0.018598906695842743, -0.04919956624507904, 0.013486110605299473, -0.018954139202833176, 0.01743171736598015, 0.06257149577140808, -0.018966825678944588, 0.00685089360922575, 0.038542620837688446, 0.004760736599564552, -0.005284068640321493, 0.03958294168114662, -0.016556326299905777, 0.008373314514756203, 0.011748014017939568, 0.010479329153895378, 0.029179731383919716, -0.02520875073969364, -0.006241925060749054, -0.038669489324092865, 0.026769232004880905, -0.029763326048851013, 0.014907036907970905, -0.036588847637176514, 0.054299674928188324, 0.02278556488454342, -0.032123077660799026, -0.030778273940086365, -0.004830514080822468, 0.0949990525841713, -0.03438133746385574, 0.02694684825837612, 0.028849873691797256, -0.026211010292172432, -0.03252905607223511, -0.005385563243180513, 0.029534962028265, 0.006711338181048632, -0.013397302478551865, -0.02520875073969364, -0.005629785358905792, -0.0010252551874145865, -0.032858915627002716, -0.05967889353632927, 0.022138535976409912, 0.05013839155435562, 0.021669121459126472, -0.021288517862558365, 0.0411561094224453, -0.04491141438484192, 0.04323675110936165, 0.05901917815208435, 0.020032519474625587, 0.013092818669974804, 0.019157128408551216, 0.011817791499197483, 0.07180751115083694, -0.02159300073981285, 0.00872854609042406, -0.00034512169077061117, -0.013524170964956284, -0.024155743420124054, -0.04572337120771408, -0.041333723813295364, 0.0013115020701661706, 0.0022328838240355253, -0.014551805332303047, -0.013980897143483162, -0.013206999748945236, -0.012927889823913574, -0.012496537528932095, 0.01965191401541233, -0.0076628513634204865, 0.04148596525192261, 0.01691155694425106, 0.06328195333480835, -0.054705653339624405, -0.028570763766765594, 0.035523150116205215, -0.026921473443508148, -0.0012702698586508632, 0.011881225742399693, 0.012908859178423882, -0.023737076669931412, -0.02586846612393856, -0.017926504835486412, -0.05424892529845238, 0.019689975306391716, 0.040268030017614365, 0.09251243621110916, -0.027022968977689743, -0.03250368312001228, 0.01456449180841446, 0.019220562651753426, -0.052827998995780945, 0.031235000118613243, -0.011703609488904476, -0.025246810168027878, 0.06394167244434357, 0.032376814633607864, 0.017558585852384567, 0.018611593171954155, -0.006628873758018017, 0.00248186313547194, 0.024079622700810432, 0.03159023076295853, -0.04102924093604088, 
0.01871308870613575, -0.02806328982114792, 0.03509179875254631, -0.009889391250908375, -0.013004010543227196, -0.010777470655739307, 0.019702661782503128, -0.013004010543227196, 0.02773343212902546, 0.0005808193818666041, -0.018015312030911446, 0.031260374933481216, 0.05389369651675224, -0.0007976850611157715, -0.036893330514431, -3.714369813678786e-05, 0.014361502602696419, -0.014818228781223297, -0.013054758310317993, 0.0009594422299414873, -0.008062486536800861, 0.03534553572535515, 0.009584907442331314, 0.043947212398052216, -0.07470010966062546, -0.03603062406182289, -0.038517244160175323, 0.0037299308460205793, 0.08708246797323227, 0.014856289140880108, 0.047474153339862823, -0.01952504552900791, -0.03630973398685455, 0.04686518758535385, 0.01245847623795271, 0.04156208783388138, 0.005721764639019966, -0.042475540190935135, 0.008608020842075348, 0.010847248136997223, -0.0343305878341198, -0.05168618634343147, 0.013004010543227196, -0.023356471210718155, 0.013879402540624142, 0.0015485873445868492, -0.0033905578311532736, -0.013511484488844872, -0.009578564204275608, 0.007187094539403915, -0.004139081574976444, 0.03323952108621597, -0.020819103345274925, 0.01683543622493744, -0.03367087244987488, 0.030448416247963905, 0.01932205632328987, -0.027403574436903, 0.016467517241835594, -0.007865840569138527, -0.0061753191985189915, 0.011931972578167915, 0.01930936984717846, 0.003967809025198221, 0.041003864258527756, -0.011919286102056503, 0.06815370172262192, 0.002194823231548071, 0.03542165830731392, 0.03778140991926193, 0.025373678654432297, -0.012033467181026936, -0.019195187836885452, 0.01337192952632904, 0.024409480392932892, 0.04724578931927681, -0.006546409334987402, -0.011684579774737358, -0.023140795528888702, 0.01744440384209156, 0.0016048852121457458, 0.04897120222449303, 0.006156289018690586, -0.026819979771971703, -0.0005173851968720555, -0.032123077660799026, -0.005943784490227699, 0.014615239575505257, 0.037451550364494324, 0.02106015384197235, 0.0033176084980368614, -0.03549777716398239, -0.016429457813501358, 0.010745753534138203, 0.006184834521263838, -0.028190158307552338, 0.011177105829119682, 0.004402333404868841, 0.0072251553647220135, -0.018560847267508507, -0.02253182791173458, -0.011805104091763496, -0.013689099811017513, 0.06343419849872589, -0.013714473694562912, 0.022747503593564034, -0.0332648940384388, -0.0037045571953058243, -0.0014597794506698847, -0.025678163394331932, -0.0069650751538574696, -0.03937995061278343, -0.005185745656490326, -0.03136186674237251, -0.009007656015455723, -0.0035427999682724476, 0.016124973073601723, -0.007808750029653311, 0.011303974315524101, -0.03047378920018673, -0.04356660693883896, 0.00872854609042406, 0.016213780269026756, -0.01186219509691, 0.0008706343942321837, 0.007555013056844473, 0.008132264018058777, -0.0028259935788810253, -0.0040693036280572414, 0.02305198833346367, -0.005328472703695297, 0.0067938026040792465, 0.01719066873192787, -0.0005681325565092266, -0.002324863336980343, 0.019766096025705338, -0.01719066873192787, 0.04130835086107254, -0.04310988262295723, -0.06810295581817627, 0.02753044292330742, -0.02386394515633583, -0.012426759116351604, -0.01200175005942583, -0.030778273940086365, 0.008100546896457672, -0.0020457529462873936, 0.019740723073482513, 0.015934670343995094, 0.012674152851104736, -0.0006989655666984618, 0.017685454338788986, -0.01918250136077404, -0.0052904123440384865, 0.046890560537576675, -0.009134524501860142, -0.0009792654309421778, 0.02626175805926323, 0.05087422579526901, 
-0.016137659549713135, -0.006070652976632118, -0.02920510433614254, 0.01656901277601719, -0.01443762332201004, -0.026616990566253662, 0.003208184614777565, 0.03661422058939934, 0.03382311388850212, -0.003580860560759902, 0.016201093792915344, -0.018675027415156364, -0.03587838262319565, -0.020159387961030006, 0.016023479402065277, -0.006774772424250841, -0.013879402540624142, 0.009496099315583706, -0.029154358431696892, 0.018433978781104088, -0.024625156074762344, -0.004595807753503323, 0.015262268483638763, 0.026236385107040405, 0.025817718356847763, 0.010244622826576233, -0.010599854402244091, -0.0036157493013888597, 0.00721881166100502, -0.026084141805768013, 0.008068829774856567, -0.006194349378347397, 0.014640613459050655, 0.005540977232158184, -0.04683981090784073, -0.007694568485021591, -0.011443529278039932, 0.027428947389125824, 0.028114037588238716, -0.01353685837239027, -0.002814892679452896, 0.0001736511185299605, -0.004015384707599878, -0.0022582574747502804, -0.019867591559886932, 0.012908859178423882, -0.04559650272130966, -0.02654086798429489, -0.023102734237909317, 0.042932264506816864, -0.014386876486241817, 0.017063800245523453, -0.012851769104599953, -0.007459861692041159, -0.00845577847212553, 0.03651272505521774, -0.023762451484799385, 0.024168429896235466, -0.020679548382759094, -0.026109516620635986, -0.0029940942768007517, -0.023293036967515945, 0.0028481956105679274, -0.010980459861457348, 0.010897994972765446, -0.09576026350259781, -0.055872842669487, 0.029636457562446594, 0.0015787186566740274, 0.039405323565006256, -0.0005308649269863963, 0.017748888581991196, 0.002234469633549452, -0.025500547140836716, -0.038060519844293594, -0.010238279588520527, -0.028469268232584, -0.026033395901322365, 0.018928764387965202, 0.002623004140332341, 0.01764739491045475, -0.016289902850985527, 0.004345242399722338, -0.002499307505786419, 0.04163820669054985, 0.0008183011668734252, -0.008525555953383446, 0.0197280365973711, 0.008569960482418537, -0.03420371934771538, 0.03136186674237251, -0.035650018602609634, -0.013181626796722412, 0.002570670796558261, -0.03189471364021301, 0.04805774986743927, 0.0004376959695946425, -0.00533164432272315, 0.01991833746433258, 0.03910084068775177, -0.004224717617034912, -0.0035681736189872026, -0.05379220098257065, 0.012940576300024986, -0.008842727169394493, -0.05353846400976181, 0.012908859178423882, -0.0026087313890457153, 0.03222457319498062, 0.0032272147946059704, 0.005997703410685062, 0.008506526239216328, -0.012318921275436878, 0.030702151358127594, -0.010580824688076973, 0.017228728160262108, -0.02151688002049923, 0.009026686660945415, 0.03199620917439461, -0.017482465133070946, 0.00134876964148134, -0.023483339697122574, 0.007789719384163618, -0.03542165830731392, 0.004132737871259451, -0.04034414887428284, 0.042932264506816864, 0.0011798760388046503, 0.03080364689230919, -0.03661422058939934, 0.01830711029469967, -0.014742108061909676, -0.004659241996705532, -0.02131389081478119, -0.01790113002061844, -0.014983157627284527, -0.031260374933481216, -0.026312505826354027, 0.01798993907868862, -0.019867591559886932, -0.022100474685430527, 0.022937806323170662, 0.02206241339445114, 0.02111090160906315, 0.05193992331624031, -0.025982648134231567, 0.03108275681734085, -0.005975501611828804, -0.014323442243039608, -0.0016794203547760844, 0.007047539576888084, -0.04176507517695427, -0.012287204153835773, 0.015960045158863068, -0.053436968475580215, -0.007396427448838949, -0.016530951485037804, 0.018027998507022858, 0.02720058523118496, 
-0.01611228659749031, 0.03161560371518135, -0.0002777228655759245, -0.018624281510710716, -0.013219687156379223, -0.01963922753930092, 0.004028071649372578, -0.027834925800561905, 0.012268174439668655, 0.009515129961073399, 0.02654086798429489, 0.022087788209319115, 0.03696944937109947, -0.008487495593726635, -0.006210207939147949, 0.004808312281966209, 0.017698140814900398, -0.021466132253408432, 0.013854028657078743, -0.004050273448228836, -0.032376814633607864, 0.01439956296235323, 0.036055997014045715, -0.022823624312877655, -0.011215166188776493, -0.009673715569078922, 0.02306467480957508, 0.0422218032181263, 0.00865876767784357, 0.027834925800561905, -0.01777426339685917, 0.022341525182127953, 0.04445468634366989, 0.014818228781223297, 0.010720379650592804, 0.009806927293539047, 0.0008222658070735633, 0.037959024310112, 0.003555486910045147, -0.01139278244227171, 0.0208444781601429, 0.017216041684150696, -0.004820999223738909, -0.007130003999918699, 0.025500547140836716, -0.006647903937846422, 0.01466598641127348, -0.0018887532642111182, -0.006724025122821331, -0.018484724685549736, -0.009134524501860142, 0.0037426177877932787, -0.027682684361934662, -0.02801254205405712, -0.005851804744452238, 0.003038498107343912, -0.0040693036280572414, -0.023838572204113007, -0.034254468977451324, 0.014462997205555439, -0.03174247220158577, 0.02626175805926323, 0.0160488523542881, -0.008937878534197807, -0.014894349500536919, 0.006673277821391821, 0.010739410296082497, -0.05087422579526901, -0.03803514689207077, -0.007421801332384348, 0.01443762332201004, 0.02380051091313362, -0.05031600594520569, -0.02199898101389408, 0.0020489245653152466, 0.057446010410785675, 0.061252061277627945, -0.01347342412918806, -0.001677834545262158, 0.009661028161644936, 0.0072378418408334255, -0.006337076425552368, 0.02646474726498127, -0.027175210416316986, 0.00394243560731411, 0.017419030889868736, -0.026211010292172432, -0.0123950419947505, 0.04932643473148346, 0.020476559177041054, 0.012572658248245716, -0.054299674928188324, 0.0018538644071668386, -0.019943712279200554, 0.025982648134231567, -0.038263507187366486, 0.06921939551830292, 0.01056179404258728, 0.011259570717811584, -0.010168502107262611, -0.0343305878341198, 0.00523014971986413, 0.02400350011885166, -0.012280860915780067, 0.033188771456480026, -0.02372439019382, -0.03674108907580376, 0.001207628520205617, 0.006571782752871513, -0.024510974064469337, -0.0056075830943882465, -0.008144951425492764, 0.009451695717871189, -0.020869851112365723, -0.01456449180841446, -0.018484724685549736, -0.025297557935118675, -0.033112652599811554, 0.0013257747050374746, -0.001363835297524929, -0.03438133746385574, -0.005648815538734198, 0.0137017872184515, -0.0048590595833957195, -0.012591688893735409, -0.017076486721634865, -0.005464856047183275, 0.010631571523845196, 0.014526431448757648, 0.021948233246803284, -0.014145825989544392, -0.009267736226320267, -0.0024786912836134434, 0.0035650019999593496, -0.0060389358550310135, 0.006863580085337162, 0.013283121399581432, -0.009477069601416588, 0.02586846612393856, 0.001041906769387424, -0.03356937691569328, 0.042602408677339554, -0.015858549624681473, 0.007307619787752628, 0.0005570315406657755, 0.010390521958470345, -0.03684258088469505, 0.013232373632490635, -0.03136186674237251, 0.0027800037059932947, 0.003793365089222789, -0.036893330514431, -0.0012813707580789924, -0.015718994662165642, -0.0030400839168578386, 0.008468465879559517, 0.01049835979938507, 0.021288517862558365, 0.012350638397037983, 
0.053436968475580215, 0.004687787499278784, 0.0015620671911165118, 0.03356937691569328, 0.018954139202833176, 0.012274517677724361, 0.019017573446035385, 0.002564327558502555, -0.03889784961938858, 0.026363253593444824, 0.005946956109255552, 0.03468582034111023, -0.027022968977689743, -0.03608137369155884, 0.003453992074355483, 0.009477069601416588, -0.0029972658958286047, -0.017926504835486412, -0.021148961037397385, 0.020704923197627068, 0.04610397666692734, 0.03991279751062393, 0.020210135728120804, -0.04135909676551819, 0.019892964512109756, -0.007580386940389872, 0.03590375557541847, -0.002180550480261445, -0.0106632886454463, 0.041130732744932175, -0.009071090258657932, 0.035523150116205215, -0.034254468977451324, -0.0147674810141325, 0.011741669848561287, 0.012090558186173439, -0.005372876767069101, 0.0074027711525559425, -0.020616114139556885, 0.016822749748826027, 0.028164783492684364, 0.02520875073969364, 0.03877098113298416, 0.03448282927274704, -0.006134087219834328, 0.003910718485713005, -0.006241925060749054, 0.01958847977221012, 0.030651405453681946, -0.010904339142143726, -0.006191177759319544, 0.019004885107278824, 0.016416771337389946, 0.006232410203665495, 0.04326212406158447, -0.025170689448714256, -0.005318957380950451, -0.00738374050706625, -0.03354400396347046, -0.010269996710121632, -0.013727160170674324, 0.0184466652572155, -0.041333723813295364, -0.03169172629714012, -0.0069206710904836655, 0.017139920964837074, -0.0006827105535194278, 0.023369159549474716, -0.004278636537492275, 0.008176668547093868, -0.014983157627284527, -0.019410865381360054, -0.010415895842015743, -0.03382311388850212, 0.005515603348612785, -0.012077871710062027, -0.0067938026040792465, 0.0411561094224453, 0.019486986100673676, -0.01597273163497448, -0.024612469598650932, 0.05308173596858978, -0.007929274812340736, 0.011811447329819202, 0.048412978649139404, -0.05026526004076004, -0.014145825989544392, -0.0038790013641119003, -0.03534553572535515, -0.008696828968822956, 0.03336638957262039, 0.03394998237490654, -0.032655924558639526, -0.004050273448228836, 0.012712213210761547, 0.006340248044580221, 0.0101558156311512, -0.0019632885232567787, -0.0005272967973724008, -0.019258622080087662, -0.004129566252231598, -0.0013170525198802352, 0.00014540307165589184, 0.03283354267477989, -0.01373984757810831, 0.019233249127864838, 0.013181626796722412, 0.003147922223433852, -0.01757127232849598, 0.01851009950041771, -0.025221437215805054, 0.010009916499257088, 0.013029384426772594, -0.02073029614984989, -0.0076691946014761925, -0.002892599441111088, -0.016124973073601723, 0.03252905607223511, 0.030829019844532013, 0.02286168560385704, 0.009781553409993649, 0.040268030017614365, 0.01437418907880783, -0.006188006140291691, -0.026896100491285324, -0.05967889353632927, -0.013346555642783642, 0.0005506881279870868, 0.001725410227663815, 0.009445352479815483, 0.019499672576785088, 0.03735005483031273, -0.03463507071137428, 0.00614994578063488, 0.011081954464316368, -0.028951367363333702, 0.022341525182127953, -0.006546409334987402, -0.00872854609042406, -0.008906161412596703, -0.0023137624375522137, -0.004021728411316872, 0.03420371934771538, -0.004871746525168419, -0.011113671585917473, 0.036893330514431, -0.00879198033362627, 0.030118556693196297, 0.03014393150806427, -0.020819103345274925, 0.018941450864076614, -0.046078603714704514, 0.013422676362097263, 0.0012195224408060312, 0.009083777666091919, 0.0006478217546828091, 0.009096464142203331, 0.011931972578167915, -0.003190740244463086, 
-0.006895297206938267, 0.004148596432060003, -0.011874881573021412, -0.0024834489449858665, -0.013130879029631615, -0.014907036907970905, 0.007161721121519804, -0.014653299935162067, 0.02237958461046219, -0.043490488082170486, 0.008233758620917797, 0.036664966493844986, 0.031260374933481216, 0.001685763825662434, 0.009603938087821007, -0.024206489324569702, 0.025094568729400635, -0.05211753770709038, -0.007447174750268459, 0.011824134737253189, 0.0039012031629681587, 0.004040758591145277, 0.0005510845803655684, 0.0018538644071668386, 0.024485601112246513, -0.011316660791635513, -0.04420094937086105, 0.014247320592403412, 0.007789719384163618, 0.050036896020174026, -0.0034349618945270777, -0.007212468422949314, -0.002510408405214548, 0.04308450594544411, -0.03869486227631569, 0.011107328347861767, 0.09439008682966232, -0.016873497515916824, -0.013435362838208675, -0.05673554912209511, 0.009661028161644936, 0.033924609422683716, -0.015071965754032135, 0.012559971772134304, -0.01453911792486906, -0.006698651239275932, -0.008183011785149574, -0.03595450520515442, -0.0007600209792144597, 0.006207036320120096, 0.01360029261559248, -0.004199343733489513, 0.011170762591063976, -0.002603973960503936, 0.01072672288864851, 0.02633787877857685, -0.025081882253289223, -0.033442508429288864, -0.015465257689356804, -0.051026470959186554, -0.04379497095942497, 0.010117754340171814, -0.05379220098257065, -0.01373984757810831, -0.027581188827753067, -0.004062960390001535, 0.0415874607861042, -0.0059152389876544476, -0.0013194313505664468, -0.013384616002440453, -0.005693219136446714, -0.005214291159063578, -0.0017555414233356714, -0.011456216685473919, 0.0008761848439462483, -0.0032668611966073513, -0.010282683186233044, 0.015871236100792885, -0.035523150116205215, -0.009724462404847145, 0.0034603355452418327, 0.004173970315605402, -0.031133504584431648, -0.01678468845784664, -0.015680933371186256, -0.012217426672577858, 0.03493955731391907, -0.0166324470192194, 0.03757841885089874, 0.01758396066725254, -0.03575151413679123, -0.02106015384197235, -0.01944892480969429, -0.004751221276819706, 0.003758476348593831, -0.010682319290935993, -0.010612541809678078, 0.023356471210718155, -0.0011299216421321034, 0.0018491068622097373, -0.002075884258374572, 0.03922770917415619, -0.029763326048851013, 0.005036675371229649, -0.0021615203004330397, 0.003787021618336439, 0.03346788138151169, 0.013232373632490635, 0.024244550615549088, -0.020349690690636635, 0.03750229999423027, -0.010301713831722736, -0.00046069087693467736, -0.027961794286966324, 0.0026198322884738445, -0.010910682380199432, -0.01837054453790188, 0.011126358062028885, 0.03321414813399315, -0.022227343171834946, -0.013118192553520203, -0.050823479890823364, -0.007929274812340736, -0.027581188827753067, -0.04765177145600319, -0.027885673567652702, -0.001037942129187286, -0.009343857876956463, -0.023178856819868088, 0.003533284878358245, 0.013562231324613094, 0.017139920964837074, -0.021402698010206223, 0.019436238333582878, 0.008830040693283081, 0.012636092491447926, 0.018814584240317345, -0.020869851112365723, -0.025386366993188858, 0.004652898292988539, 0.03762916848063469, 7.721131260041147e-05, 0.002069540787488222, 0.0197280365973711, 0.008011739701032639, -0.020083267241716385, 0.013168939389288425, 0.0024279439821839333, 0.0023613381199538708, 0.03877098113298416, 0.005667845718562603, -0.012534597888588905, -0.02492964081466198, 0.01439956296235323, 0.018751149997115135, -0.044606927782297134, 0.03577688708901405, 0.015807801857590675, 
0.032655924558639526, 0.006213379558175802, -0.0045514036901295185, 0.03722318634390831, -0.038390375673770905, 0.007574043236672878, 0.04656070098280907, -0.030321547761559486, -0.01056179404258728, 0.0010387350339442492, 0.010041633620858192, -0.023432593792676926, 0.02659161575138569, 0.009166241623461246, 0.04397258535027504, -0.005950127728283405, -0.005147685296833515, -0.029534962028265, 0.01918250136077404, 0.025437112897634506, 0.037121694535017014, -0.001991833792999387, 0.008259132504463196, 0.006067480891942978, 0.008595334365963936, 0.032046958804130554, 0.013143565505743027, -0.0052935839630663395, 0.005566351115703583, 0.013612979091703892, -0.028570763766765594, 0.012261830270290375, 0.05358920991420746, -0.015465257689356804, -0.0027800037059932947, -0.018497413024306297, 0.024282611906528473, 0.024840831756591797, -0.0075359828770160675, -0.015465257689356804, 0.05521312728524208, -0.0070982868783175945, 0.0015121126780286431, 0.01223645731806755, -0.002351823030039668, -0.004272293299436569, 0.008271819911897182, -0.03623361513018608, 0.010314400307834148, -0.03504105284810066, -0.03283354267477989, -0.04790550842881203, 0.01035880483686924, 0.007136347237974405, 0.007561356294900179, 0.0024041561409831047, 0.010549107566475868, 0.034711193293333054, -0.017609333619475365, -0.02366095595061779, -0.005785198882222176, 0.03252905607223511, 0.011043894104659557, 0.005502916872501373, 0.017660081386566162, 0.0023819541092962027, 0.00798636581748724, -0.020045205950737, -0.010168502107262611, 0.014450310729444027, 0.023737076669931412, -0.004849544260650873, -0.0004939938080497086, -0.02552592195570469, -0.015630187466740608, -0.026921473443508148, 0.022747503593564034, 0.0049859280698001385, -0.003113033249974251, 0.007834123447537422, -0.025969961658120155, 0.0250057615339756, 0.03230069577693939, -0.03970981016755104, 0.002879912732169032, -0.005902552045881748, 0.052980244159698486, 0.029763326048851013, 0.0051191397942602634, 0.01246482040733099, 0.02305198833346367, -0.006035763770341873, -0.011081954464316368, -0.002402570331469178, 0.037324681878089905, 0.00728224590420723, -0.013333868235349655, -0.04874283820390701, -0.002694367663934827, -0.04805774986743927, -0.024016188457608223, -0.002415257040411234, -0.014145825989544392, 0.011982720345258713, -0.028520015999674797, -0.001601713476702571, 0.02098403312265873, -0.002550054807215929, 0.06368793547153473, 0.02674385905265808, -0.010352461598813534, -0.01200175005942583, 0.02405424788594246, 0.01417119987308979, -0.01717798039317131, -0.04250091314315796, -0.05465490370988846, -0.0004361101018730551, 0.030829019844532013, 0.03169172629714012, 0.014475683681666851, -0.010739410296082497, -0.02010864019393921, 0.028443895280361176, -0.006156289018690586, 0.03676646202802658, 0.03321414813399315, -0.007612104061990976, -0.0242572370916605, -0.04752490296959877, 0.034127600491046906, -0.03136186674237251, -0.01703842543065548, -0.03217382729053497, 0.025843093171715736, -0.014183887280523777, -0.02126314304769039, 0.005927925929427147, -0.01651826500892639, -0.012953263707458973, -0.0033302954398095608, 0.009128181263804436, -0.0036474664229899645, 0.03897397220134735, -0.009096464142203331, 0.013854028657078743, -0.019360117614269257, 0.009502442553639412, 0.008474809117615223, 0.03882173076272011, 0.028570763766765594, -0.03757841885089874, -0.0020631973166018724, 0.01824367605149746, -0.0197280365973711, -0.003923404961824417, -0.044809918850660324, 0.0024327014107257128, -0.008639737963676453, 
-0.005950127728283405, -0.0015168702229857445, -0.01223011314868927, -0.004843201022595167, -0.023546773940324783, -0.00706656975671649, -0.008639737963676453, 0.01771082915365696, 0.019436238333582878, -0.014082391746342182, -0.008531900122761726, -0.0006894504185765982, 0.012902515940368176, 0.01864965446293354, -0.002380368299782276, -0.0009031444205902517, -0.015592126175761223, -0.006907984148710966, -0.006584469694644213, -0.02145344577729702, -0.010853591375052929, -0.015807801857590675, 0.010714036412537098, -0.011855851858854294, 0.02258257381618023, -0.015287641435861588, 0.028951367363333702, 0.015477944165468216, -0.04303376004099846, 0.009153555147349834, 0.014462997205555439, -0.04420094937086105, -0.03694407641887665, -0.029636457562446594, 0.04204418510198593, 0.0013725574826821685, -0.020742982625961304, 0.005712249781936407, -0.012420415878295898, -0.04744878038764, 0.0035776887089014053, 0.006774772424250841, 0.027708057314157486, -0.006540066096931696, -0.04057251289486885, 0.007872183807194233, 0.022354211658239365, 0.015795115381479263, 0.015820488333702087, 0.0171145461499691, -0.01824367605149746, -0.028520015999674797, 0.00555049255490303, 0.03674108907580376, 0.027302078902721405, -0.018027998507022858, 0.016962304711341858, 0.028773752972483635, -0.009109150618314743, -0.051965296268463135, 0.0032399017363786697, -0.02419380284845829, 0.027302078902721405, -0.03288428857922554, 0.008677798323333263, 0.019283996894955635, -0.02326766401529312, 0.02326766401529312, 0.04021728038787842, 0.010428582318127155, 0.03217382729053497, 0.004763908218592405, -0.035929128527641296, -0.036461979150772095, -0.00232169171795249, 0.015452571213245392, -0.010878965258598328, -0.03958294168114662, 0.01002894714474678, 0.01403164491057396, -0.025551294907927513, 0.011062923818826675, 0.0014264765195548534, -0.014957783743739128, -0.014577179215848446, -0.022696755826473236, -0.025487860664725304, -0.011316660791635513, -0.030626030638813972, -0.017533212900161743, -0.0011838406790047884, 0.010606197640299797, -0.014691360294818878, -0.029534962028265, -0.04704280197620392, 0.003790193470194936, 0.020159387961030006, -0.0057376231998205185, 0.035726141184568405, 0.005845461506396532, -0.008849070407450199, 0.0018570361426100135, 0.004957382567226887, -0.0010894823353737593, 0.00582960294559598, -0.016987677663564682, -0.0125282546505332, -0.005090594291687012], index=0, object='embedding')], model='text-embedding-3-small', object='list', usage=Usage(prompt_tokens=5, total_tokens=5), meta={'usage': {'credits_used': 1}}) ``` {% endcode %}
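If you only need the vector itself rather than the whole response object, you can pull it out of `response.data[0].embedding`. Below is a minimal sketch that assumes the `response` from the Python example above is still in scope; by default, `text-embedding-3-small` returns 1536-dimensional vectors, so that is the length you should see printed.

```python
# Assumes `response` from the Python example above is still in scope
embedding = response.data[0].embedding

# text-embedding-3-small returns 1536-dimensional vectors by default
print(len(embedding))   # 1536
print(embedding[:5])    # first five components of the vector
```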
You can find a more advanced example of using embedding vectors in our article [Find Relevant Answers: Semantic Search with Text Embeddings](https://docs.aimlapi.com/use-cases/find-relevant-answers-semantic-search-with-text-embeddings) in the Use Cases section.

---

# Source: https://docs.aimlapi.com/api-references/embedding-models/openai/text-embedding-ada-002.md

# text-embedding-ada-002

{% columns %}
{% column width="66.66666666666666%" %}
{% hint style="info" %}
This documentation is valid for the following list of our models:

* `text-embedding-ada-002`
{% endhint %}
{% endcolumn %}

{% column width="33.33333333333334%" %}
Try in Playground
{% endcolumn %}
{% endcolumns %}

## Model Overview

An efficient and reliable embedding model designed to convert text into numerical representations. It serves as a foundational tool for various natural language processing (NLP) applications, enabling machines to understand and process human language more effectively.

## Setup your API Key

If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up).

## API Schema

{% openapi src="" path="/v1/embeddings" method="post" %}
[text-embedding-ada-002.json](https://3927338786-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FROMd1X5PuqtikJ48n2N9%2Fuploads%2Fgit-blob-56e765841e8c66f5e026fb08241d595c6bf3d1ca%2Ftext-embedding-ada-002.json?alt=media\&token=08c46a7d-738e-42be-a7d9-26f34966b447)
{% endopenapi %}

## Code Example

{% tabs %}
{% tab title="Python" %}
```python
import openai

# Initialize the API client
client = openai.OpenAI(
    # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
    api_key="<YOUR_AIMLAPI_KEY>",
    base_url="https://api.aimlapi.com/v1",
)

# Define the text for which to generate an embedding
text = "Laura is a DJ."

# Request the embedding
response = client.embeddings.create(
    input=text,
    model="text-embedding-ada-002"
)

# Print the embedding
print(response)
```
{% endtab %}

{% tab title="JS" %}
```javascript
import OpenAI from "openai";
import util from "util";

// Initialize the API client
const client = new OpenAI({
  // Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
  apiKey: "<YOUR_AIMLAPI_KEY>",
  baseURL: "https://api.aimlapi.com/v1",
});

// Define the text for which to generate an embedding
const text = "Laura is a DJ.";

const response = await client.embeddings.create({
  input: text,
  model: "text-embedding-ada-002",
});

// Convert embedding to a regular array (not TypedArray)
const pythonLikeResponse = {
  ...response,
  data: response.data.map(item => ({
    ...item,
    embedding: Array.from(item.embedding),
  })),
};

// Python-like print
console.log(
  util.inspect(pythonLikeResponse, {
    depth: null,
    maxArrayLength: null,
    compact: true,
  })
);
```
{% endtab %}
{% endtabs %}

This example shows how to set up an API client, send text to the embedding API, and print the response with the embedding vector. Note how large a vector the model returns for just a single short input phrase.
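As a quick illustration of what these vectors can be used for, the sketch below embeds two sentences and compares them with cosine similarity. This is a minimal example that reuses the `client` from the Python tab above; the `cosine_similarity` and `embed` helpers and the sample sentences are our own illustration, not part of the API.

```python
import math

def cosine_similarity(a, b):
    # Dot product of the two vectors divided by the product of their norms
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def embed(text):
    # Request an embedding for a single string and return the raw vector
    return client.embeddings.create(input=text, model="text-embedding-ada-002").data[0].embedding

similarity = cosine_similarity(embed("Laura is a DJ."), embed("Laura plays music at clubs."))
print(similarity)  # values closer to 1.0 indicate more similar meaning
```

The [Find Relevant Answers: Semantic Search with Text Embeddings](https://docs.aimlapi.com/use-cases/find-relevant-answers-semantic-search-with-text-embeddings) use case expands this idea into a full search workflow.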
Response {% code overflow="wrap" %} ```json CreateEmbeddingResponse(data=[Embedding(embedding=[-0.009682911448180676, -0.012292581610381603, -0.02386169135570526, -0.02174294739961624, -0.0024966278579086065, 0.001669801538810134, -0.01511541847139597, 2.5636460122768767e-05, -0.006023558788001537, -0.023474115878343582, 0.004008169751614332, 0.006117222830653191, -0.007157215382903814, -0.02139413170516491, -0.006640448700636625, -0.010658307932317257, 0.01862942986190319, -0.00039847538573667407, 0.030049968510866165, -0.02293151058256626, -0.007635224144905806, 0.0016552675515413284, 0.011181534267961979, 0.00046024512266740203, -0.030592573806643486, -0.0018296762136742473, 0.007241189945489168, -0.005377600900828838, 0.01158202812075615, 0.018009310588240623, 0.010974827222526073, 0.008358697406947613, -0.018513157963752747, -0.004602451343089342, -0.009992971085011959, -0.04622475802898407, -0.0072928667068481445, -0.0032750072423368692, 0.007544789928942919, 0.017169564962387085, 0.0067115044221282005, 0.019417500123381615, -0.003946803510189056, -0.005112757906317711, -0.007099078968167305, 0.021303696557879448, -0.012492829002439976, -0.01108464039862156, 0.006065546069294214, 0.01047097984701395, 0.026484280824661255, 0.012047117576003075, -0.028292963281273842, -0.026975208893418312, 0.003985561430454254, -0.00389835680834949, 0.010645388625562191, 0.0037368673365563154, 0.010018809698522091, -0.030024129897356033, -0.023112379014492035, 0.011666002683341503, -0.020192649215459824, 0.0011215446284040809, -0.005044932477176189, 0.005564928520470858, 0.00349786295555532, 0.033770687878131866, 0.01567094214260578, 0.01486995443701744, 0.029352333396673203, 0.006533865816891193, -0.0034235776402056217, -0.004466800019145012, -0.0018748933216556907, -0.0009366392041556537, -0.01850023865699768, 0.01586472988128662, 0.003972642123699188, 0.005988031160086393, 0.0070603215135633945, -0.02787954919040203, 0.0187973789870739, 0.01609727554023266, 0.013939774595201015, 0.010206137783825397, -0.021988412365317345, 0.024068396538496017, -0.009482664056122303, -0.007079700473695993, -0.0020428423304110765, 0.013196922838687897, 0.02582540363073349, 0.021419968456029892, -0.009392229840159416, 0.008281182497739792, -0.02225971408188343, 0.017738008871674538, -0.0022140212822705507, -0.005332383792847395, -0.002202716888859868, 0.015244610607624054, -0.028318801894783974, 0.0002262871857965365, -0.026716824620962143, 0.013629714958369732, 0.0014049587771296501, 0.0140689667314291, 0.001882967771962285, -0.00040190701838582754, -0.0007856464944779873, 0.01921079307794571, 0.02865470014512539, -0.014327350072562695, 0.001310487394221127, -0.008959438651800156, 0.008268263190984726, -0.012505748309195042, 3.716782157425769e-05, 0.00349786295555532, 0.03157442808151245, 0.022892754524946213, 0.006892372388392687, -0.024262184277176857, 0.001658497261814773, 0.0020428423304110765, 0.0005539090489037335, -0.007706279866397381, -0.007661062758415937, 0.007751496508717537, 0.019288307055830956, -0.02537323161959648, 0.00548418378457427, 0.001301605487242341, -0.01696285791695118, 0.014456541277468204, -0.018861975520849228, 0.009947754442691803, -0.018254775553941727, -0.001213593757711351, 0.02052854746580124, 0.010574333369731903, -0.021665433421730995, -0.01683366671204567, 0.006443431600928307, 0.024171750992536545, -0.0017182484734803438, -0.03658706322312355, 0.0009156455635093153, 0.028163772076368332, 0.025101929903030396, -0.008571863174438477, -0.0026920302771031857, 0.006033248268067837, 
-0.008151990361511707, 0.01830645091831684, -0.009405149146914482, 0.029662394896149635, -0.009883157908916473, 0.0003532583068590611, -0.006398214492946863, 0.015231691300868988, 0.036251164972782135, -0.003019853960722685, 0.01638149656355381, 0.022156361490488052, -0.00028240479878149927, -0.00033973355311900377, -0.004857604391872883, 0.01089085265994072, 0.011762896552681923, 0.024791870266199112, -0.022479340434074402, 0.010367627255618572, -0.007996960543096066, 0.004147050902247429, -0.02107115276157856, -0.007441436871886253, -0.024946900084614754, -0.014689086005091667, 0.006394984666258097, -0.005077230278402567, 0.014727843925356865, 0.003339603077620268, -0.01834520883858204, -0.003455875674262643, 0.006905291695147753, -0.008940059691667557, -0.00549387326464057, -0.019288307055830956, -0.010109243914484978, 0.018655268475413322, 0.016433173790574074, -0.0018571293912827969, -0.6697292923927307, 0.013028973713517189, 0.0002581813605502248, 0.003753016237169504, 0.009992971085011959, 0.032556287944316864, 0.0033024605363607407, 0.0023948894813656807, -0.024107154458761215, 0.00603970792144537, -0.014702005311846733, 0.016652798280119896, 0.0015115418937057257, 0.004027548711746931, 0.0004541892558336258, -0.02052854746580124, 0.0011223520850762725, -0.007312245201319456, -0.009049872867763042, 0.004815617110580206, -0.01769925095140934, 0.026006272062659264, 0.019895508885383606, -0.009579557925462723, 0.010877933353185654, 0.005380830727517605, 0.017712170258164406, -0.009863779880106449, 0.009863779880106449, -0.010025269351899624, -0.017260000109672546, 0.016588203608989716, -0.0014768216060474515, -0.013423008844256401, 0.03831823170185089, -0.021639594808220863, 0.00477040046826005, 0.04043697565793991, -0.003510782029479742, 0.029920777305960655, -0.03596694394946098, -0.0008151183137670159, 0.015683861449360847, 0.0050675407983362675, -0.01763465441763401, 0.014624490402638912, 0.001010924344882369, 0.014094804413616657, -0.007770875468850136, -0.0018361357506364584, -0.0007481001666747034, -0.012473450042307377, 0.016665717586874962, 0.00448940834030509, -0.013041893020272255, 0.009657072834670544, -0.003147430717945099, -0.004550774581730366, -0.01260910090059042, -0.003413888392969966, -0.006595231592655182, -0.013733068481087685, -0.005138596519827843, 0.002020233776420355, 0.005658592563122511, -0.028603022918105125, 0.005833001341670752, -0.009508502669632435, 0.01856483519077301, -0.003946803510189056, -0.01310648862272501, 0.014056047424674034, 0.00966999214142561, -0.005665052216500044, 0.01198252197355032, 0.003985561430454254, 0.005061081610620022, -0.0026855706237256527, -0.013255059719085693, 0.01940458081662655, -0.000392015790566802, -0.023525793105363846, -0.022466421127319336, 0.015335043892264366, 0.041031256318092346, -0.012886863201856613, -0.023538712412118912, -0.003286311635747552, 0.004573382902890444, -0.007764415815472603, 0.012124632485210896, 0.04327918961644173, -0.01206003688275814, 0.02576080709695816, -0.001552721718326211, 0.019895508885383606, -0.01975339651107788, 0.0018361357506364584, -0.0020299232564866543, -0.0013928470434620976, 0.003914505708962679, -0.005061081610620022, 0.020192649215459824, -0.0012475064722821116, 0.02087736502289772, 0.008003420196473598, -0.022208038717508316, 0.02865470014512539, 0.02145872637629509, -0.0272077526897192, -0.01564510352909565, -0.014236915856599808, 0.010425763204693794, -0.004395744763314724, 0.003491403302177787, -0.03299553692340851, 0.011678921990096569, 0.011782274581491947, 
0.004466800019145012, -0.04462278261780739, 0.03924841061234474, 0.010903771966695786, 0.007512492127716541, -0.021923815831542015, -0.026716824620962143, 0.03803401067852974, -0.002378740580752492, -0.0084297526627779, -0.02669098787009716, 0.004951268434524536, 0.005335613619536161, 0.00031974923331290483, 0.032840508967638016, -0.02052854746580124, 0.010509737767279148, -0.0067115044221282005, 0.032039519399404526, -0.0008098699036054313, 0.016756152734160423, -0.010671227239072323, -0.037723951041698456, -0.021949654445052147, -0.006327159237116575, -0.0020686807110905647, -0.007777335122227669, -0.025722049176692963, -0.005406668875366449, -0.011846871115267277, -0.0265617948025465, 0.008093854412436485, -0.005429277662187815, -0.006417593453079462, 0.014120643027126789, 0.04159969836473465, 0.010064026340842247, -0.011271968483924866, -0.020244326442480087, -0.0018054527463391423, -0.03051505796611309, -0.0033557522110641003, 0.008662297390401363, -0.0025047024246305227, -0.011846871115267277, 0.0014235300477594137, -0.004369906149804592, -0.0009124157368205488, 0.006495108362287283, 0.010587252676486969, 0.002549919532611966, -0.02498565800487995, -0.0009535955614410341, -0.004147050902247429, -0.016459010541439056, 0.006918211001902819, 0.01583889126777649, -0.0016778761055320501, 0.025644535198807716, 0.012641399167478085, 0.006327159237116575, -0.015399640426039696, -0.015606346540153027, -0.0036884204018861055, -0.00271786842495203, 0.007176593877375126, 0.005296856164932251, 0.011020044796168804, -0.012712454423308372, 0.03356397897005081, -0.009883157908916473, -0.017544221132993698, -0.01683366671204567, 0.023151136934757233, -0.001649615354835987, 0.01683366671204567, 0.005145055707544088, 0.016420254483819008, 0.02409423515200615, 0.004621829837560654, -0.0013225991278886795, 0.020192649215459824, 0.0017925335559993982, 0.0005849957815371454, 0.031109340488910675, -0.023370763286948204, 0.004563693888485432, -0.021342454478144646, -0.0026048258878290653, -0.016562364995479584, 0.05020385980606079, 0.010787499137222767, 0.016691556200385094, -0.032297901809215546, 0.01352636143565178, -0.02052854746580124, 0.01486995443701744, 0.033331435173749924, -0.014805358834564686, 0.01183395180851221, -0.00276631535962224, -0.004318229854106903, -0.003614135319367051, 0.008468510583043098, -0.00034901921753771603, -0.016265224665403366, -0.012408854439854622, 0.008055097423493862, 0.0034235776402056217, -0.009159685112535954, 0.02245350182056427, -0.016420254483819008, -0.013823502697050571, -0.012815807946026325, -0.0006411133799701929, 0.013823502697050571, -0.026187140494585037, -0.028189608827233315, 0.0056198351085186005, -0.018616510555148125, 0.02733694575726986, -0.015890568494796753, 0.009437447413802147, 0.021084070205688477, 0.023370763286948204, 0.002215636195614934, 0.04030778259038925, -0.01718248426914215, 0.015658022835850716, -0.00875273160636425, -0.006259333807975054, -0.0028357559349387884, -0.028758052736520767, -0.02171711064875126, -0.01183395180851221, -0.01955961063504219, 0.005736107472330332, -0.02621297724545002, -0.012615560553967953, 0.0023415980394929647, -0.00031551014399155974, 0.021419968456029892, 0.010238435119390488, 0.03418410196900368, 0.007480194326490164, -0.021471645683050156, 0.006091384682804346, 0.00126850011292845, -0.008888382464647293, 0.010852095670998096, -0.03211703523993492, -0.000513132952619344, -0.0107939587906003, -0.004631519317626953, 0.013461765833199024, -0.013784744776785374, 0.034132424741983414, 0.009547259658575058, 
-0.004344068001955748, 0.005816852208226919, -0.0013460151385515928, 0.004589532036334276, -0.005251639056950808, -0.038628291338682175, 0.03829239308834076, 0.016872424632310867, 0.009159685112535954, -0.025747887790203094, -0.02422342635691166, 0.015477155335247517, -0.032969698309898376, -0.010600171983242035, -0.003475254401564598, -0.002378740580752492, 0.010735822841525078, 0.008772110566496849, 0.019004086032509804, 0.006743802223354578, 0.03206535801291466, -0.03542434051632881, -0.010787499137222767, -0.029636556282639503, -0.017738008871674538, -0.003491403302177787, -0.013080650940537453, -0.01366847287863493, 0.022686047479510307, 0.003740097163245082, 0.0062011973932385445, 0.005380830727517605, 0.011026504449546337, -0.01683366671204567, -0.000834900769405067, -0.0031328964978456497, -0.01315170619636774, 0.001504274783656001, 0.0015583737986162305, 0.008365157060325146, -0.004860834218561649, -0.0016907951794564724, 0.006233495194464922, 0.0202960018068552, -0.023474115878343582, -0.015309206210076809, -0.034003231674432755, -0.013035433366894722, 0.0950850248336792, 0.019960103556513786, 0.006508027669042349, 0.028344640508294106, -0.017531301826238632, -0.012402394786477089, -0.031626105308532715, -0.0007525411201640964, -0.00503524299710989, 0.025773726403713226, 0.004053386859595776, -0.0021639594342559576, -0.0343649685382843, 0.002543459879234433, 0.018138501793146133, -0.012938539497554302, 0.01631690002977848, -0.016459010541439056, -0.00047922012163326144, 0.008294101804494858, 0.005099839065223932, 0.02148456498980522, -0.004518476780503988, 0.024468891322612762, -0.003165194531902671, 0.002217251108959317, 0.006191507913172245, 0.018125582486391068, 0.013590957969427109, -0.019507933408021927, 0.020735254511237144, 0.03160026669502258, -0.008998195640742779, 0.0140689667314291, 0.015012064948678017, 0.004315000027418137, 0.025786645710468292, -0.004757481161504984, 0.024804789572954178, -0.011420538648962975, 0.016781989485025406, 0.03126436844468117, 0.007848390378057957, 0.006743802223354578, 0.046302273869514465, -0.020308921113610268, -0.027698680758476257, 0.029507363215088844, -0.009133846499025822, 0.002037997590377927, 0.0225180983543396, 0.009482664056122303, 0.00815845001488924, -0.011969602666795254, 0.011097559705376625, 0.013267978094518185, 0.008197207935154438, 0.0012676926562562585, -0.009237200021743774, 0.004266553092747927, 0.026742663234472275, -0.0035366204101592302, -0.007783794775605202, -0.021316615864634514, 0.0064208232797682285, -0.03131604567170143, -0.017776764929294586, 0.014495299197733402, -0.030282514169812202, -0.021290777251124382, 0.004599221516400576, -0.011343023739755154, -0.03144523873925209, -0.0009148381068371236, 0.014650329016149044, 0.02065773867070675, 0.018461480736732483, 0.006614610552787781, 0.012944999150931835, 0.013332574628293514, -0.0021397359669208527, -0.014753681607544422, -0.006530635990202427, -0.025037335231900215, -0.01869402639567852, -0.004172889050096273, 0.010309490375220776, 0.01673031412065029, -0.028706375509500504, 0.030489221215248108, -0.01374598778784275, 0.012292581610381603, 0.02772451937198639, 0.00644020177423954, 0.014779520221054554, -0.00040291633922606707, 0.013565119355916977, 0.017169564962387085, -0.009275957942008972, -0.006059086415916681, -0.009198443032801151, -0.030179159715771675, -0.01096836756914854, -0.00039120836299844086, -0.00462505966424942, -0.01265431847423315, 0.013384250923991203, -0.009179064072668552, -0.004046927206218243, -0.00926949828863144, 
-0.017595898360013962, -0.006120452657341957, 0.003213641233742237, -0.0028890473768115044, 0.0010343403555452824, 0.025528263300657272, -0.01165954302996397, 0.0024191129487007856, 0.013371331617236137, 0.01307419128715992, -0.004560464061796665, -0.031212693080306053, 0.025360314175486565, 0.02264728955924511, 0.017789684236049652, 0.01631690002977848, 0.0009059561998583376, -0.024753112345933914, -0.0036335140466690063, -0.009243659675121307, 0.011730598285794258, 0.01037408597767353, -0.011246129870414734, -0.011200912296772003, -0.018267692998051643, -0.0007961433148011565, 0.016523607075214386, 0.03924841061234474, -0.017337514087557793, -0.004098603967577219, -0.033098891377449036, 0.017195403575897217, -0.021768786013126373, -0.02264728955924511, -0.010645388625562191, -0.021562080830335617, -0.0053226943127810955, 0.025644535198807716, -0.018254775553941727, 0.009611856192350388, -0.007118457928299904, 0.008998195640742779, 0.002123587066307664, 0.0011683766497299075, 0.0005232260446064174, -0.016794908791780472, -0.0004111926828045398, -0.02437845803797245, 0.027156077325344086, 0.006756721064448357, 0.029145628213882446, -0.00634330790489912, 0.004218106158077717, 0.010387005284428596, 0.035579368472099304, -0.008533106185495853, 0.013487604446709156, 0.0007271065260283649, -0.01580013334751129, 0.010658307932317257, 0.029533201828598976, 0.013965613208711147, -0.014017289504408836, -0.008035718463361263, -0.014081886038184166, 0.00036294767051003873, -0.010626009665429592, -0.001724708010442555, 0.005309775471687317, -0.0017812293954193592, -0.007376840803772211, -0.009728128090500832, -0.014844115823507309, 0.004618600010871887, -0.0006516102002933621, 0.009863779880106449, 0.014482379890978336, -0.002063835971057415, 0.026742663234472275, -0.011846871115267277, 0.02444305270910263, -0.01106526143848896, 0.007990500889718533, 0.0023335234727710485, -0.006756721064448357, -0.011846871115267277, -0.001341170398518443, -0.0009245274704881012, -0.0038337609730660915, -0.008888382464647293, 0.0011966372840106487, 0.01722124218940735, -0.0068471552804112434, 0.015050822868943214, -0.001997625222429633, 0.016497768461704254, 0.007790253963321447, -0.01760881580412388, -0.00743497721850872, -0.020929040387272835, -0.01103296410292387, -0.03253044933080673, -0.01670447550714016, -0.027285268530249596, 0.0030779901426285505, 0.004999715369194746, -0.0022140212822705507, 0.02145872637629509, -0.004844685550779104, -0.040617842227220535, -0.0052936263382434845, 0.012718914076685905, 0.004941578954458237, 0.008139071986079216, 0.027259429916739464, 0.01280288863927126, -0.011426998302340508, -0.0007287214393727481, 0.012712454423308372, 0.023073621094226837, -0.003134511411190033, 0.007867769338190556, 0.028241286054253578, -0.019094519317150116, 0.00930179562419653, 0.015683861449360847, 0.023435357958078384, -0.0041082934476435184, -0.011827492155134678, 0.018060987815260887, -0.005374371074140072, 0.013913936913013458, -0.01087147369980812, -0.004579842556267977, 0.01702745445072651, 0.008965898305177689, -0.002671036636456847, 0.008701055310666561, -0.02787954919040203, 0.011226750910282135, -0.011271968483924866, -0.002782464260235429, -0.0077191987074911594, 0.0013330959482118487, -0.02344827726483345, -0.00808093510568142, -0.0020234636031091213, -0.0136038763448596, 0.021562080830335617, 0.006475729402154684, -0.0312902070581913, 0.020063458010554314, -0.00806155614554882, 0.005367911420762539, -0.007971122860908508, -0.0020977486856281757, -0.006504797842353582, 
-0.005891137290745974, 0.008597701787948608, 0.02191089652478695, -0.016872424632310867, -0.00457661272957921, -0.0010932839941233397, -0.01988258957862854, 0.018874894827604294, -0.003969412297010422, -0.0030973688699305058, 0.01268661580979824, 0.03338311240077019, -0.03253044933080673, -0.013552200049161911, -0.008042178116738796, -0.0140689667314291, 0.011821032501757145, -0.023409519344568253, -0.03420994058251381, 0.009385770186781883, -0.020166810601949692, 0.03051505796611309, -0.027388621121644974, 0.0095408009365201, -0.04426104575395584, 0.002415883122012019, 0.0038531399331986904, 0.013913936913013458, 0.005506792571395636, -0.008694595657289028, 0.016420254483819008, -0.031238531693816185, 0.012738293036818504, -0.020244326442480087, -0.002572527853772044, -0.004964187741279602, 0.009676451794803143, -0.009837941266596317, -0.020864445716142654, 0.02463684044778347, -0.022220958024263382, -0.01702745445072651, -0.013435927219688892, -0.004034007899463177, 0.007544789928942919, -0.021342454478144646, -0.012473450042307377, 0.01856483519077301, -0.026897693052887917, 0.009082170203328133, -0.02293151058256626, -0.004686425905674696, 0.012324879877269268, -0.04157385975122452, -0.01280288863927126, 0.028422154486179352, -0.003316994523629546, 0.005745796952396631, 0.014107723720371723, -0.009166144765913486, 0.016084356233477592, -0.012873943895101547, 0.013384250923991203, 0.004583072382956743, 0.02437845803797245, -0.016691556200385094, -0.012964378111064434, -0.007415598724037409, 0.0019233401399105787, -0.03563104569911957, 0.029765747487545013, 0.01538672111928463, 0.012144011445343494, 0.01895240880548954, 0.02692353166639805, 0.01670447550714016, 0.002020233776420355, -0.006398214492946863, -0.013119407929480076, 0.011510972864925861, -0.001425144961103797, -0.03607029840350151, -0.009308255277574062, -0.007673981599509716, -0.005128907039761543, 0.007770875468850136, -0.014236915856599808, -0.01029657106846571, -0.0059040565975010395, -0.01657528430223465, 0.0010117318015545607, -0.006501568015664816, 0.017789684236049652, -0.021497484296560287, -0.03116101585328579, -0.0005143440794199705, 0.005451885983347893, -0.007389760110527277, 0.03831823170185089, -0.006042937748134136, -0.022427663207054138, 0.017208322882652283, 0.00363028421998024, 0.021691272035241127, -0.02213052287697792, -0.019572529941797256, -0.009282417595386505, -0.005713499151170254, 0.007596466690301895, 0.018293531611561775, 0.015412559732794762, -0.03157442808151245, -0.018396886065602303, -0.019288307055830956, 0.021213263273239136, -0.004541085101664066, 0.02733694575726986, -0.00020832147856708616, -0.02318989485502243, -0.0015753302723169327, 0.0059105162508785725, 0.003843450453132391, -0.01853899657726288, 0.02061898075044155, 0.014805358834564686, 0.000898689148016274, 0.026716824620962143, 0.009992971085011959, -0.016006840392947197, 0.015076661482453346, -0.02072233520448208, 0.028447993099689484, -0.026897693052887917, 0.013003136031329632, 0.02338368259370327, 0.010554954409599304, -0.010186758823692799, -0.022737722843885422, -0.03012748435139656, -0.012311960570514202, 0.006666287314146757, 0.013901017606258392, -0.015373801812529564, 0.0034720245748758316, 0.01511541847139597, -0.0040663061663508415, -0.02424926497042179, 0.007912985980510712, -0.008242424577474594, -0.0005644058692269027, -0.009986511431634426, 0.013216301798820496, 0.0010496817994862795, -0.03397739306092262, -0.007673981599509716, -0.01479243952780962, -0.004098603967577219, -0.004163199570029974, 
0.005044932477176189, 0.02880972996354103, -0.014714924618601799, -0.021859221160411835, 0.018577754497528076, -0.023047784343361855, 0.004376365803182125, 0.0025563789531588554, 0.014404864981770515, 0.008791489526629448, 0.026200057938694954, 0.22965100407600403, -0.017195403575897217, -0.0265617948025465, 0.0272077526897192, -0.009334093891084194, 0.028086256235837936, 0.033589817583560944, 0.002687185537070036, -0.013642634265124798, 0.013145246542990208, -0.014637409709393978, -0.00848142895847559, 0.005096609238535166, 0.0035398502368479967, -0.007518951781094074, -0.007770875468850136, -0.02087736502289772, -0.05214173346757889, -0.02643260359764099, -0.027259429916739464, 0.01722124218940735, -0.007338083349168301, -0.00667274696752429, -0.004993255715817213, 0.040747035294771194, 0.00804863777011633, -0.011071721091866493, 0.0247014369815588, 0.020373517647385597, 0.02154916152358055, -0.014443621970713139, 0.002420727862045169, -0.0006895602564327419, 0.017079131677746773, 0.03010164573788643, 0.0013815427664667368, -0.01698869653046131, -0.029894938692450523, 0.007241189945489168, 0.005458345636725426, -0.010406384244561195, 0.005342073272913694, -0.010722903534770012, -0.006872993893921375, -0.010981286875903606, 0.024132993072271347, -0.011026504449546337, -0.003031158121302724, 0.01198252197355032, 0.023461196571588516, -0.021885059773921967, -0.0015939015429466963, 0.003656122600659728, 0.04847269132733345, -0.009450366720557213, 0.01804806850850582, 0.04475197568535805, 0.021730029955506325, -0.023719580844044685, 0.01943041756749153, -0.0202960018068552, 0.02161375619471073, -0.007099078968167305, 0.020166810601949692, -0.016614040359854698, -0.008423293009400368, -0.0196888018399477, -0.0074220579117536545, 0.006640448700636625, -0.02238890714943409, -0.00997359212487936, -0.00046307119191624224, -0.027543650940060616, -0.005367911420762539, -0.013125867582857609, -0.041548021137714386, 0.02653595618903637, 0.004709034226834774, 0.037853140383958817, 0.024727273732423782, -0.0035366204101592302, 0.0015850196359679103, -0.010251354426145554, -0.003371901111677289, -0.005758716259151697, -0.03591526672244072, 0.016342738643288612, 0.0011489979224279523, -0.009553719311952591, -0.00815845001488924, 0.0010811722604557872, -0.03170362114906311, -0.0008583167800679803, -0.01328089740127325, -0.00046508980449289083, -0.005277477204799652, -0.01230550091713667, -0.004544314928352833, -0.01479243952780962, 0.010451601818203926, -0.00934701319783926, 0.016678636893630028, 0.023848772048950195, 0.011549729853868484, -0.0004832573758903891, 0.011827492155134678, 0.02614838257431984, 0.008307020179927349, 0.009140306152403355, -0.022686047479510307, 0.00749311363324523, -0.021187424659729004, 0.020929040387272835, 0.0010214211652055383, -0.005135366693139076, 0.020903203636407852, 0.011045882478356361, -0.009353472851216793, 0.0253473948687315, -0.008074475452303886, -0.011142776347696781, -0.033770687878131866, 0.004221335984766483, 0.005516482051461935, 0.0049964855425059795, -0.01843564212322235, -0.010251354426145554, 0.011317185126245022, -0.011859790422022343, -0.03924841061234474, 0.02254393696784973, -0.007221810985356569, 0.035992782562971115, -0.002929419744759798, 0.0013363257749006152, -0.005426047835499048, 0.014572814106941223, -0.0014945854200050235, -0.018125582486391068, 0.02180754393339157, 0.02258269302546978, 0.004069535993039608, 0.007861309684813023, -0.00930179562419653, -0.004595991689711809, -0.011459295637905598, 0.016743233427405357, -0.0029681771993637085, 
-0.012486369349062443, -0.0023238342255353928, 0.011601407080888748, -0.013513442128896713, -0.006543555296957493, -0.014753681607544422, 0.008474969305098057, -0.004644438624382019, -0.02435261942446232, -0.021045314148068428, 0.006931129842996597, 0.013694310560822487, -0.015050822868943214, -0.0014752066927030683, 0.023370763286948204, -0.005461575463414192, -0.022698966786265373, -0.012906242161989212, -0.16484849154949188, 0.01592932641506195, -0.003123207250609994, -0.016407335177063942, -0.008248884230852127, -0.0004965802654623985, 0.01718248426914215, 0.0009616700699552894, -0.027698680758476257, 0.00703448336571455, 0.04097957909107208, 0.03103182464838028, -0.029093950986862183, -0.013720149174332619, -0.0006346537848003209, 0.009366392157971859, -0.016652798280119896, 0.0024788640439510345, 0.037956494837999344, 0.011007125489413738, 0.028163772076368332, 0.00994129478931427, -0.003869288833811879, -0.022737722843885422, 0.01318400353193283, 0.01728583686053753, 0.003962952643632889, 0.018293531611561775, 0.0004610525502357632, 0.00857832282781601, -0.022414743900299072, -0.0020056997891515493, 0.017789684236049652, -0.009437447413802147, 0.005726417992264032, 0.0017166335601359606, -0.012531585991382599, -0.009334093891084194, 0.008668757043778896, 0.03893835097551346, 0.011898547410964966, 0.008487888611853123, -0.020954879000782967, 0.012699535116553307, 0.00804863777011633, 0.01946917548775673, 0.011078180745244026, 0.01711788773536682, -0.01789303869009018, -0.015477155335247517, -0.0005474494537338614, -0.02973990887403488, 0.0024239576887339354, -0.015761377289891243, 0.008507267571985722, -0.003123207250609994, -0.00775795616209507, 0.01791887730360031, 0.008184288628399372, -0.014017289504408836, -0.004227795638144016, -0.054208800196647644, 0.01638149656355381, -0.0024433364160358906, -0.007867769338190556, 0.0019411039538681507, -0.012583263218402863, 0.004825306590646505, -0.02759532816708088, 0.0018716634949669242, -0.010348248295485973, -0.013319655321538448, 0.0003361000563018024, -0.013319655321538448, -0.009230740368366241, 0.0053291539661586285, -0.01936582289636135, 0.008668757043778896, 0.008662297390401363, -0.021833382546901703, -0.0007089389837346971, 0.0265617948025465, -0.009069250896573067, -0.004344068001955748, -0.023138217628002167, -0.020515628159046173, 0.0011191223748028278, 0.023616226390004158, -0.0368712842464447, -0.000528474454768002, 0.04847269132733345, -0.018577754497528076, -0.027698680758476257, -0.018745703622698784, 0.008313479833304882, 0.010626009665429592, 0.016148950904607773, 0.01570970006287098, -0.008339318446815014, -0.008164909668266773, -0.013119407929480076, -0.0035301607567816973, -0.008520186878740788, 0.01374598778784275, 0.02238890714943409, -0.002244704170152545, -0.004599221516400576, 0.030334189534187317, 0.011872708797454834, -0.008113233372569084, -0.002328678732737899, 0.027001047506928444, 0.01366847287863493, 0.019985942170023918, 0.017789684236049652, -0.009592477232217789, 0.024533487856388092, -0.0375172458589077, 0.03826655447483063, -0.004570153076201677, 0.0668695792555809, 0.002651657909154892, -0.005077230278402567, -0.009191983379423618, -0.014534056186676025, -0.0005280707264319062, -0.0874885618686676, -0.032427094876766205, 0.019973022863268852, -0.0004142206162214279, -0.02997245453298092, 0.026484280824661255, 0.0011514202924445271, 0.00594281405210495, -0.03234957903623581, 0.020864445716142654, -0.007686900906264782, -0.02027016319334507, -0.013003136031329632, -0.004470029845833778, 
0.012144011445343494, -0.008901301771402359, -1.185307792184176e-05, -0.009140306152403355, -0.01840980537235737, 0.009327634237706661, -0.004105063620954752, -0.004037237726151943, 0.004563693888485432, -0.0003851524961646646, -0.01856483519077301, -0.014262753538787365, -0.022298472002148628, 0.023499954491853714, 0.0058459206484258175, -0.003594756592065096, 0.016949938610196114, -0.02682017907500267, -0.00019277811225038022, -0.023680822923779488, -0.013758907094597816, -0.007447896525263786, -0.017040373757481575, -0.0016423483612015843, 0.020670657977461815, -0.019352903589606285, 0.011071721091866493, -0.009502043016254902, 0.0003389261255506426, -0.006049397401511669, 0.023267408832907677, 0.010903771966695786, 0.0010424148058518767, 0.005736107472330332, 0.0044571105390787125, -0.019417500123381615, -0.039946045726537704, -0.013513442128896713, -0.022608531638979912, 0.004053386859595776, 0.03263380005955696, -0.0215879175812006, 0.020386436954140663, 0.022014250978827477, -0.04216814041137695, -0.010122163221240044, -0.003607675665989518, -0.01225382462143898, -0.03276299312710762, 0.0047058044001460075, -0.020102214068174362, -0.025050252676010132, -0.002583832247182727, 0.008539565838873386, 0.03291802108287811, -0.019314145669341087, -0.00035648810444399714, 0.002880973042920232, -0.005203192122280598, 0.00926949828863144, 0.005303315818309784, 0.0005862069665454328, -0.0181514210999012, -0.021096989512443542, 0.018293531611561775, -0.014030208811163902, -0.02065773867070675, -0.007996960543096066, 0.007512492127716541, 0.01108464039862156, 0.04929951950907707, 0.01573553867638111, -0.008733352646231651, 0.001942718867212534, 0.015257528983056545, 0.00862354040145874, -0.029352333396673203, 0.02997245453298092, 0.01875862292945385, -0.033770687878131866, -0.027311107143759727, -0.008022799156606197, 0.006627529859542847, -0.02733694575726986, 0.0038208418991416693, -0.013500523753464222, 0.01049035880714655, -0.02383585274219513, -0.06402736157178879, 0.02424926497042179, 0.01255742460489273, 0.013229221105575562, -0.007971122860908508, -0.02357746846973896, -0.01374598778784275, 0.00024808826856315136, 0.018293531611561775, -0.01850023865699768, -0.015167094767093658, -0.0015050822403281927, -0.0002319393097423017, 0.015696780756115913, -0.014689086005091667, -0.01946917548775673, 0.020037619397044182, 0.01374598778784275, -0.002656502416357398, -0.009605396538972855, -0.028422154486179352, 0.012970837764441967, 0.0090757105499506, 0.005044932477176189, 0.012693075463175774, -0.024623921141028404, -0.005093379411846399, -0.004479719325900078, -0.018874894827604294, -0.00867521669715643, 0.005681201349943876, -0.01808682642877102, 0.011239670217037201, 0.003972642123699188, -0.005442196503281593, -0.05258098617196083, -0.00136135658249259, 0.030876794829964638, -0.005823311861604452, 0.014094804413616657, -0.021639594808220863, -0.050927333533763885, 0.007512492127716541, 0.00989607721567154, -0.00603001844137907, -0.00038293201941996813, -0.03338311240077019, -0.020037619397044182, 0.034830059856176376, 0.007816092111170292, 0.03183281421661377, 0.01220860704779625, -0.00775795616209507, -0.005635984241962433, -0.02476603165268898, -0.006113993003964424, -0.01798347197473049, -0.01083917636424303, 0.002021848689764738, -0.0020073147024959326, 0.012725373730063438, 0.013255059719085693, 0.007971122860908508, 0.010877933353185654, 0.018732784315943718, -0.030825119465589523, 0.01979215443134308, -0.006365916691720486, 0.014508217573165894, -0.02906811237335205, 
-0.03051505796611309, 0.01036116760224104, 0.009088629856705666, 0.006097843870520592, 0.0036109054926782846, -0.014146481640636921, 0.013093570247292519, -0.00749311363324523, -0.02264728955924511, 0.02973990887403488, 0.01089085265994072, 0.018642349168658257, -0.009482664056122303, 0.015347963199019432, 0.019327064976096153, 0.03079928085207939, -0.013590957969427109, -0.0028212217148393393, 0.011368861421942711, 0.011704759672284126, -0.02669098787009716, -0.004437732044607401, 0.010012350045144558, 0.002270542550832033, 0.0015268833376467228, 0.003969412297010422, 0.014779520221054554, -0.022660208866000175, -0.003927425015717745, 0.015218771994113922, 0.018874894827604294, 0.0039920206181705, -0.009502043016254902, -0.0419614352285862, -0.006084925029426813, 0.007673981599509716, -0.016471929848194122, -0.0010189987951889634, 0.03191032633185387, -0.01474076323211193, 0.010012350045144558, 0.0002210387756349519, -0.0022963809315115213, 0.021949654445052147, -0.01049035880714655, 0.0036819609813392162, 0.0037691653706133366, -0.018487319350242615, -0.012363636866211891, -0.0029697921127080917, 0.010865014977753162, 0.001341170398518443, 0.03591526672244072, -0.013784744776785374, -0.015373801812529564, 0.019029924646019936, 0.026716824620962143, -0.007163675036281347, 0.01352636143565178, -0.014908712357282639, 0.02113574743270874, -0.011646623723208904, -0.014185238629579544, -0.01917203515768051, -0.01392685528844595, 0.009850860573351383, 0.010645388625562191, 0.004896362312138081, -0.022492259740829468, 0.0807705968618393, -0.011323644779622555, -0.028034579008817673, 0.012034198269248009, -0.011304265819489956, 0.011866249144077301, 0.008759191259741783, 0.015244610607624054, -0.012415314093232155, 0.00979918334633112, -0.006046167574822903, -0.01609727554023266, 0.0027873090002685785, -0.03793065622448921, -0.007971122860908508, -0.021691272035241127, -0.0011877553770318627, 0.031109340488910675, -0.0002652465191204101, -0.012118172831833363, 0.029171466827392578, 0.029016435146331787, 0.013836422003805637, 0.01635565795004368, -0.03855077549815178, 0.000779590627644211, 0.0196888018399477, -0.001992780715227127, -0.013371331617236137, -0.015231691300868988, 0.02588999830186367, -0.015619265846908092, -0.022492259740829468, -0.017518382519483566, 0.024197589606046677, -0.02213052287697792, -0.003995250444859266, -0.007247649598866701, 0.010348248295485973, 0.03260796144604683, -0.0059105162508785725, -0.008390994742512703, -0.02528279833495617, -0.021833382546901703, 0.016717394813895226, -0.019895508885383606, -0.01103296410292387, -0.01074228249490261, -0.00835223775357008], index=0, object='embedding')], model='text-embedding-ada-002-v2', object='list', usage=Usage(prompt_tokens=5, total_tokens=5), meta={'usage': {'credits_used': 2}}) ``` {% endcode %}
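If you want to go one step beyond printing the raw vector, a common next step is comparing the embeddings of two texts. Below is a minimal sketch (not part of the official example) that embeds two phrases and computes their cosine similarity; it assumes the same client setup as above, that `numpy` is installed, and uses `<YOUR_AIMLAPI_KEY>` as a placeholder for your key. The two phrases are arbitrary examples.

{% code overflow="wrap" %}
```python
import numpy as np
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aimlapi.com/v1",
    # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
    api_key="<YOUR_AIMLAPI_KEY>",
)

# Embed two example phrases in a single request
resp = client.embeddings.create(
    model="text-embedding-ada-002",
    input=["Laura is a DJ.", "Laura plays music at parties."],
)

# Cosine similarity between the two vectors (closer to 1.0 = more similar)
a, b = (np.array(item.embedding) for item in resp.data)
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"cosine similarity: {cosine:.3f}")
```
{% endcode %}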
You can find a more advanced example of using embedding vectors in our article [Find Relevant Answers: Semantic Search with Text Embeddings](https://docs.aimlapi.com/use-cases/find-relevant-answers-semantic-search-with-text-embeddings) in the Use Cases section. --- # Source: https://docs.aimlapi.com/api-references/text-models-llm.md # Text Models (LLM)
## Overview

The AI/ML API provides access to text-based models, also known as **Large Language Models** (**LLM**s), and allows you to interact with them through natural language (which is why a third common name for such models is **chat models**). These models can be applied to a wide variety of tasks, enabling the creation of diverse applications with our API. For example, text models can be used to:

* Create a system that searches your photos using text prompts.
* Act as a psychological supporter.
* Play games with you through natural language.
* Assist you with coding.
* Perform security assessments (pentests) on servers to find vulnerabilities.
* Write documentation for your services.
* Serve as a grammar corrector for multiple languages with deep context understanding.
* And much more.
## Specific Capabilities

There are several capabilities of text models that are worth mentioning separately.

**Completion** allows the model to analyze a given text fragment and predict how it might continue, based on the probabilities of the next possible tokens or characters. **Chat Completion** extends this functionality, enabling a simulated dialogue between the user and the model based on predefined roles (e.g., "strict language teacher" and "student"). A detailed description and examples can be found in our [Completion and Chat Completion](https://docs.aimlapi.com/capabilities/completion-or-chat-models) article.

***

An evolution of chat completion includes **Assistants** (preconfigured conversational agents with specific roles) and **Threads** (a mechanism for maintaining conversation history for context). Examples of this functionality can be found in the [Managing Assistants & Threads](https://docs.aimlapi.com/solutions/openai/assistants) article.

***

**Function Calling** allows a chat model to invoke external programmatic tools (e.g., a function you have written) while generating a response, as sketched below. A detailed description and examples are available in the [Function Calling](https://docs.aimlapi.com/capabilities/function-calling) article.
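The following is a minimal sketch of what a function-calling request might look like with the OpenAI SDK pointed at the AI/ML API. The `get_weather` function and its JSON schema are hypothetical, `<YOUR_AIMLAPI_KEY>` is a placeholder, and exact tool-calling support varies by model, so check the Function Calling article and each model's API schema for specifics.

{% code overflow="wrap" %}
```python
import json
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aimlapi.com/v1",
    # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
    api_key="<YOUR_AIMLAPI_KEY>",
)

# A hypothetical local tool the model may ask us to call
def get_weather(city: str) -> str:
    return json.dumps({"city": city, "forecast": "sunny", "temp_c": 21})

# Describe the tool so the model knows when and how to call it
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)

# If the model decided to call the tool, run it with the arguments it produced
tool_calls = response.choices[0].message.tool_calls
if tool_calls:
    args = json.loads(tool_calls[0].function.arguments)
    print(get_weather(**args))
```
{% endcode %}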
## Endpoint

All text and chat models use the same endpoint:

`https://api.aimlapi.com/v1/chat/completions`

The parameters may vary (especially between models from different developers), so it's best to check the API schema on each model's page for details. Example: [**o4-mini**](https://docs.aimlapi.com/api-references/openai/o4-mini#api-schema).
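As a quick illustration of calling this endpoint directly, without an SDK, here is a minimal sketch using the `requests` library. The model ID and prompt are arbitrary examples, and `<YOUR_AIMLAPI_KEY>` is a placeholder for your key; the SDK-based version follows in the next section.

{% code overflow="wrap" %}
```python
import requests

response = requests.post(
    "https://api.aimlapi.com/v1/chat/completions",
    headers={
        # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
        "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
        "Content-Type": "application/json",
    },
    json={
        "model": "gpt-4o",
        "messages": [
            {"role": "user", "content": "Say hello in one short sentence."}
        ],
    },
)
response.raise_for_status()

# Print only the assistant's reply
print(response.json()["choices"][0]["message"]["content"])
```
{% endcode %}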
## Quick Code Example

We will call the [**gpt-4o**](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o) model using the Python programming language and the OpenAI SDK.

{% hint style="info" %}
If you need a more detailed explanation of how to call a model's API in code, check out our [QUICKSTART](https://github.com/aimlapi/api-docs/blob/main/docs/api-references/text-models-llm/broken-reference/README.md) section.
{% endhint %}

{% code overflow="wrap" %}
```python
%pip install openai

import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.aimlapi.com/v1",
    # Insert your AIML API Key in the quotation marks instead of <YOUR_AIMLAPI_KEY>:
    api_key="<YOUR_AIMLAPI_KEY>",
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "You are an AI assistant who knows everything.",
        },
        {
            "role": "user",
            "content": "Tell me, why is the sky blue?"
        },
    ],
)

message = response.choices[0].message.content
print(f"Assistant: {message}")
```
{% endcode %}

By running this code example, we received the following response from the chat model:

{% code overflow="wrap" %}
```http
Assistant: The sky appears blue due to a phenomenon called Rayleigh scattering. When sunlight enters Earth's atmosphere, it collides with gas molecules and small particles. Sunlight is made up of different colors, each with different wavelengths. Blue light has a shorter wavelength and is scattered in all directions by the gas molecules in the atmosphere more than other colors with longer wavelengths, such as red or yellow. As a result, when you look up at the sky during the day, you see this scattered blue light being dispersed in all directions, making the sky appear blue to our eyes. During sunrise and sunset, the sun's light passes through a greater thickness of Earth's atmosphere, scattering the shorter blue wavelengths out of your line of sight and leaving the longer wavelengths, like red and orange, more dominant, which is why the sky often turns those colors at those times.
```
{% endcode %}
## Complete Text Model List
| Model ID + API Reference link | Developer | Context | Model Card |
| --- | --- | --- | --- |
| gpt-3.5-turbo | Open AI | 16,000 | Chat GPT 3.5 Turbo |
| gpt-3.5-turbo-0125 | Open AI | 16,000 | Chat GPT-3.5 Turbo 0125 |
| gpt-3.5-turbo-1106 | Open AI | 16,000 | Chat GPT-3.5 Turbo 1106 |
| gpt-4o | Open AI | 128,000 | Chat GPT-4o |
| gpt-4o-2024-08-06 | Open AI | 128,000 | GPT-4o-2024-08-06 |
| gpt-4o-2024-05-13 | Open AI | 128,000 | GPT-4o-2024-05-13 |
| gpt-4o-mini | Open AI | 128,000 | Chat GPT 4o mini |
| gpt-4o-mini-2024-07-18 | Open AI | 128,000 | GPT 4o mini |
| chatgpt-4o-latest | Open AI | 128,000 | - |
| gpt-4o-audio-preview | Open AI | 128,000 | GPT-4o Audio Preview |
| gpt-4o-mini-audio-preview | Open AI | 128,000 | GPT-4o mini Audio |
| gpt-4o-search-preview | Open AI | 128,000 | GPT-4o Search Preview |
| gpt-4o-mini-search-preview | Open AI | 128,000 | GPT-4o Mini Search Preview |
| gpt-4-turbo | Open AI | 128,000 | Chat GPT 4 Turbo |
| gpt-4-turbo-2024-04-09 | Open AI | 128,000 | - |
| gpt-4 | Open AI | 8,000 | Chat GPT 4 |
| gpt-4-0125-preview | Open AI | 8,000 | - |
| gpt-4-1106-preview | Open AI | 8,000 | - |
| o1 | Open AI | 200,000 | OpenAI o1 |
| openai/o3-2025-04-16 | Open AI | 200,000 | o3 |
| o3-mini | Open AI | 200,000 | OpenAI o3 mini |
| openai/o3-pro | Open AI | 200,000 | o3-pro |
| openai/gpt-4.1-2025-04-14 | Open AI | 1,000,000 | GPT-4.1 |
| openai/gpt-4.1-mini-2025-04-14 | Open AI | 1,000,000 | GPT-4.1 Mini |
| openai/gpt-4.1-nano-2025-04-14 | Open AI | 1,000,000 | GPT-4.1 Nano |
| openai/o4-mini-2025-04-16 | Open AI | 200,000 | GPT-o4-mini-2025-04-16 |
| openai/gpt-oss-20b | Open AI | 128,000 | GPT OSS 20B |
| openai/gpt-oss-120b | Open AI | 128,000 | GPT OSS 120B |
| openai/gpt-5-2025-08-07 | Open AI | 400,000 | GPT-5 |
| openai/gpt-5-mini-2025-08-07 | Open AI | 400,000 | GPT-5 Mini |
| openai/gpt-5-nano-2025-08-07 | Open AI | 400,000 | GPT-5 Nano |
| openai/gpt-5-chat-latest | Open AI | 400,000 | GPT-5 Chat |
| openai/gpt-5-1 | Open AI | 128,000 | GPT-5.1 |
| openai/gpt-5-1-chat-latest | Open AI | 128,000 | GPT-5.1 Chat Latest |
| openai/gpt-5-1-codex | Open AI | 400,000 | GPT-5.1 Codex |
| openai/gpt-5-1-codex-mini | Open AI | 400,000 | GPT-5.1 Codex Mini |
| openai/gpt-5-2 | Open AI | 400,000 | GPT-5.2 |
| openai/gpt-5-2-chat-latest | Open AI | 400,000 | GPT-5.2 Chat Latest |
| openai/gpt-5-2-pro | Open AI | 400,000 | GPT-5.2 Pro |
| openai/gpt-5-2-codex | Open AI | 400,000 | GPT-5.2 Codex |
| claude-3-opus-20240229 | Anthropic | 200,000 | Claude 3 Opus |
| claude-3-haiku-20240307 | Anthropic | 200,000 | - |
| claude-3-5-haiku-20241022 | Anthropic | 200,000 | - |
| claude-3-7-sonnet-20250219 | Anthropic | 200,000 | Claude 3.7 Sonnet |
| anthropic/claude-opus-4 | Anthropic | 200,000 | Claude 4 Opus |
| anthropic/claude-opus-4.1<br>claude-opus-4-1<br>claude-opus-4-1-20250805 | Anthropic | 200,000 | Claude Opus 4.1 |
| anthropic/claude-sonnet-4 | Anthropic | 200,000 | Claude 4 Sonnet |
| claude-sonnet-4-5-20250929<br>anthropic/claude-sonnet-4.5<br>claude-sonnet-4-5 | Anthropic | 200,000 | Claude 4.5 Sonnet |
| anthropic/claude-haiku-4.5<br>claude-haiku-4-5<br>claude-haiku-4-5-20251001 | Anthropic | 200,000 | Claude 4.5 Haiku |
| anthropic/claude-opus-4-5<br>claude-opus-4-5<br>claude-opus-4-5-20251101 | Anthropic | 200,000 | Claude 4.5 Opus |
| Qwen/Qwen2.5-7B-Instruct-Turbo | Alibaba Cloud | 32,000 | Qwen 2.5 7B Instruct Turbo |
| qwen-max | Alibaba Cloud | 32,000 | Qwen Max |
| qwen-max-2025-01-25 | Alibaba Cloud | 32,000 | Qwen Max 2025-01-25 |
| qwen-plus | Alibaba Cloud | 131,000 | Qwen Plus |
| qwen-turbo | Alibaba Cloud | 1,000,000 | Qwen Turbo |
| Qwen/Qwen2.5-72B-Instruct-Turbo | Alibaba Cloud | 32,000 | Qwen 2.5 72B Instruct Turbo |
| Qwen/Qwen3-235B-A22B-fp8-tput | Alibaba Cloud | 32,000 | Qwen 3 235B A22B |
| alibaba/qwen3-32b | Alibaba Cloud | 131,000 | Qwen3-32B |
| alibaba/qwen3-coder-480b-a35b-instruct | Alibaba Cloud | 262,000 | Qwen3 Coder |
| alibaba/qwen3-235b-a22b-thinking-2507 | Alibaba Cloud | 262,000 | Qwen3 235B A22B Thinking |
| alibaba/qwen3-next-80b-a3b-instruct | Alibaba Cloud | 262,000 | Qwen3-Next-80B-A3B Instruct |
| alibaba/qwen3-next-80b-a3b-thinking | Alibaba Cloud | 262,000 | Qwen3-Next-80B-A3B Thinking |
| alibaba/qwen3-max-preview | Alibaba Cloud | 258,000 | Qwen3-Max Preview |
| alibaba/qwen3-max-instruct | Alibaba Cloud | 262,000 | Qwen3-Max Instruct |
| qwen3-omni-30b-a3b-captioner | Alibaba Cloud | 65,000 | qwen3-omni-30b-a3b-captioner |
| alibaba/qwen3-vl-32b-instruct | Alibaba Cloud | 126,000 | Qwen3 VL 32B Instruct |
| alibaba/qwen3-vl-32b-thinking | Alibaba Cloud | 126,000 | Qwen3 VL 32B Thinking |
| anthracite-org/magnum-v4-72b | Anthracite | 32,000 | Magnum v4 72B |
| baidu/ernie-4-5-8k-preview | Baidu | 8,000 | ERNIE 4.5 |
| baidu/ernie-4.5-0.3b | Baidu | 120,000 | ERNIE 4.5 |
| baidu/ernie-4.5-21b-a3b | Baidu | 120,000 | ERNIE 4.5 |
| baidu/ernie-4.5-21b-a3b-thinking | Baidu | 131,000 | ERNIE 4.5 |
| baidu/ernie-4.5-vl-28b-a3b | Baidu | 30,000 | ERNIE 4.5 VL |
| baidu/ernie-4.5-vl-424b-a47b | Baidu | 123,000 | ERNIE 4.5 VL |
| baidu/ernie-4.5-300b-a47b | Baidu | 123,000 | ERNIE 4.5 |
| baidu/ernie-4.5-300b-a47b-paddle | Baidu | 123,000 | ERNIE 4.5 |
| baidu/ernie-4-5-turbo-128k | Baidu | 128,000 | ERNIE 4.5 |
| baidu/ernie-4-5-turbo-vl-32k | Baidu | 32,000 | ERNIE 4.5 VL |
| baidu/ernie-5-0-thinking-preview | Baidu | 128,000 | ERNIE 5.0 |
| baidu/ernie-5-0-thinking-latest | Baidu | 128,000 | ERNIE 5.0 |
| baidu/ernie-x1-turbo-32k | Baidu | 32,000 | Coming Soon |
| baidu/ernie-x1-1-preview | Baidu | 64,000 | Coming Soon |
| bytedance/seed-1-8 | ByteDance | 256,000 | Seed 1.8 |
| cohere/command-a | Cohere | 256,000 | Command A |
| deepseek-chat or<br>deepseek/deepseek-chat or<br>deepseek/deepseek-chat-v3-0324 | DeepSeek | 128,000 | DeepSeek V3 |
| deepseek/deepseek-r1 or<br>deepseek-reasoner | DeepSeek | 128,000 | DeepSeek R1 |
| deepseek/deepseek-prover-v2 | DeepSeek | 164,000 | DeepSeek Prover V2 |
| deepseek/deepseek-chat-v3.1 | DeepSeek | 128,000 | DeepSeek V3.1 Chat |
| deepseek/deepseek-reasoner-v3.1 | DeepSeek | 128,000 | DeepSeek V3.1 Reasoner |
| deepseek/deepseek-thinking-v3.2-exp | DeepSeek | 128,000 | DeepSeek V3.2-Exp Thinking |
| deepseek/deepseek-non-thinking-v3.2-exp | DeepSeek | 128,000 | DeepSeek V3.2-Exp Non-Thinking |
| deepseek/deepseek-reasoner-v3.1-terminus | DeepSeek | 128,000 | DeepSeek V3.1 Terminus Reasoning |
| deepseek/deepseek-non-reasoner-v3.1-terminus | DeepSeek | 128,000 | DeepSeek V3.1 Terminus Non-Reasoning |
| deepseek/deepseek-v3.2-speciale | DeepSeek | 128,000 | DeepSeek V3.2 Speciale |
| gemini-2.0-flash-exp | Google | 1,000,000 | Gemini 2.0 Flash Experimental |
| gemini-2.0-flash | Google | 1,000,000 | Gemini 2.0 Flash |
| google/gemini-2.5-flash-lite-preview | Google | 1,000,000 | |
| google/gemini-2.5-flash | Google | 1,000,000 | Gemini 2.5 Flash |
| google/gemini-3-flash-preview | Google | 1,000,000 | Gemini 3 Flash |
| google/gemini-2.5-pro | Google | 1,000,000 | Gemini 2.5 Pro |
| google/gemini-3-pro-preview | Google | 200,000 | Gemini 3 Pro Preview |
| google/gemma-3-4b-it | Google | 128,000 | Gemma 3 (4B) |
| google/gemma-3-12b-it | Google | 128,000 | Gemma 3 (12B) |
| google/gemma-3-27b-it | Google | 128,000 | Gemma 3 (27B) |
| google/gemma-3n-e4b-it | Google | 8,192 | Gemma 3n 4B |
| gryphe/mythomax-l2-13b | Gryphe | 4,000 | MythoMax-L2 (13B) |
| mistralai/Mixtral-8x7B-Instruct-v0.1 | Mistral AI | 64,000 | Mixtral-8x7B Instruct v0.1 |
| meta-llama/Llama-3.3-70B-Instruct-Turbo | Meta | 128,000 | Meta Llama 3.3 70B Instruct Turbo |
| meta-llama/Llama-3.2-3B-Instruct-Turbo | Meta | 131,000 | Llama 3.2 3B Instruct Turbo |
| meta-llama/Meta-Llama-3-8B-Instruct-Lite | Meta | 9,000 | Llama 3 8B Instruct Lite |
| meta-llama/Meta-Llama-3.1-405B-Instruct-Turbo | Meta | 4,000 | Llama 3.1 (405B) Instruct Turbo |
| meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo | Meta | 128,000 | Llama 3.1 8B Instruct Turbo |
| meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo | Meta | 128,000 | Llama 3.1 70B Instruct Turbo |
| meta-llama/llama-4-scout | Meta | 1,000,000 | Llama 4 Scout |
| meta-llama/llama-4-maverick | Meta | 256,000 | Llama 4 Maverick |
| meta-llama/llama-3.3-70b-versatile | Meta | 131,000 | Llama 3.3 70B Versatile |
| mistralai/Mistral-7B-Instruct-v0.2 | Mistral AI | 32,000 | Mistral (7B) Instruct v0.2 |
| mistralai/Mistral-7B-Instruct-v0.3 | Mistral AI | 32,000 | Mistral (7B) Instruct v0.3 |
| mistralai/mistral-tiny | Mistral AI | 32,000 | Mistral Tiny |
| mistralai/mistral-nemo | Mistral AI | 128,000 | Mistral Nemo |
| nvidia/llama-3.1-nemotron-70b-instruct | NVIDIA | 128,000 | Llama 3.1 Nemotron 70B Instruct |
| nvidia/nemotron-nano-9b-v2 | NVIDIA | 128,000 | Nemotron Nano 9B V2 |
| nvidia/nemotron-nano-12b-v2-vl | NVIDIA | 128,000 | Nemotron Nano 12B V2 VL |
| MiniMax-Text-01 | MiniMax | 1,000,000 | MiniMax-Text-01 |
| minimax/m1 | MiniMax | 1,000,000 | MiniMax M1 |
| minimax/m2 | MiniMax | 200,000 | MiniMax M2 |
| minimax/m2-1 | MiniMax | 204,800 | MiniMax-M2.1 |
| moonshot/kimi-k2-preview | Moonshot | 131,000 | Kimi-K2 |
| moonshot/kimi-k2-0905-preview | Moonshot | 256,000 | Kimi-K2 |
| moonshot/kimi-k2-turbo-preview | Moonshot | 256,000 | Kimi K2 Turbo Preview |
| nousresearch/hermes-4-405b | NousResearch | 131,000 | - |
| perplexity/sonar | Perplexity | 128,000 | Sonar |
| perplexity/sonar-pro | Perplexity | 200,000 | Sonar Pro |
| x-ai/grok-3-beta | xAI | 131,000 | Grok 3 Beta |
| x-ai/grok-3-mini-beta | xAI | 131,000 | Grok 3 Beta Mini |
| x-ai/grok-4-07-09 | xAI | 256,000 | Grok 4 |
| x-ai/grok-code-fast-1 | xAI | 256,000 | Grok Code Fast 1 |
| x-ai/grok-4-fast-non-reasoning | xAI | 2,000,000 | Grok 4 Fast |
| x-ai/grok-4-fast-reasoning | xAI | 2,000,000 | Grok 4 Fast Reasoning |
| x-ai/grok-4-1-fast-non-reasoning | xAI | 2,000,000 | Grok 4.1 Fast Non-Reasoning |
| x-ai/grok-4-1-fast-reasoning | xAI | 2,000,000 | Grok 4.1 Fast Reasoning |
| zhipu/glm-4.5-air | Zhipu | 128,000 | GLM-4.5 Air |
| zhipu/glm-4.5 | Zhipu | 128,000 | GLM-4.5 |
| zhipu/glm-4.6 | Zhipu | 200,000 | GLM-4.6 |
| zhipu/glm-4.7 | Zhipu | 200,000 | GLM-4.7 |
--- # Source: https://docs.aimlapi.com/api-references/embedding-models/google/text-multilingual-embedding-002.md # text-multilingual-embedding-002 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `text-multilingual-embedding-002` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A state-of-the-art model designed to convert textual data into numerical vector representations, capturing the semantic meaning and context of the input text. It is particularly focused on supporting multiple languages, making it suitable for global applications. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema {% openapi src="" path="/v1/embeddings" method="post" %} [text-multilingual-embedding-002.json](https://3927338786-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FROMd1X5PuqtikJ48n2N9%2Fuploads%2Fgit-blob-f37cf8cef8c5a7e89270c34155d1df985c9b82c3%2Ftext-multilingual-embedding-002.json?alt=media\&token=3d45d1fb-fbfa-4bac-a932-debf8d036fbb) {% endopenapi %} ## Code Example {% tabs %} {% tab title="Python" %}
```python
import openai

# Initialize the API client
client = openai.OpenAI(
    # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
    api_key="<YOUR_AIMLAPI_KEY>",
    base_url="https://api.aimlapi.com/v1",
)

# Define the text for which to generate an embedding
text = "Laura is a DJ."

# Request the embedding
response = client.embeddings.create(
    input=text,
    model="text-multilingual-embedding-002"
)

# Print the embedding
print(response)
```
{% endtab %}

{% tab title="JS" %}
```javascript
import OpenAI from "openai";
import util from "util";

// Initialize the API client
const client = new OpenAI({
  // Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
  apiKey: "<YOUR_AIMLAPI_KEY>",
  baseURL: "https://api.aimlapi.com/v1",
});

// Define the text for which to generate an embedding
const text = "Laura is a DJ.";

const response = await client.embeddings.create({
  input: text,
  model: "text-multilingual-embedding-002",
});

// Convert the embedding to a regular array (not a TypedArray)
const pythonLikeResponse = {
  ...response,
  data: response.data.map(item => ({
    ...item,
    embedding: Array.from(item.embedding),
  })),
};

// Python-like print
console.log(
  util.inspect(pythonLikeResponse, {
    depth: null,
    maxArrayLength: null,
    compact: true,
  })
);
```
{% endtab %}
{% endtabs %}

This example shows how to set up an API client, send text to the embedding API, and print the response containing the embedding vector. Note how large a vector the model returns for even a single short input phrase.
Response {% code overflow="wrap" %} ```json CreateEmbeddingResponse(data=[Embedding(embedding=[0.10541483759880066, -0.007883904501795769, 0.026113376021385193, 0.054472994059324265, 0.0018547746585682034, 0.02877396158874035, -0.00769799156114459, -0.003931521438062191, -0.057099711149930954, -0.020155033096671104, -0.06802556663751602, -0.019316386431455612, -0.06921651214361191, -0.007963858544826508, -0.05464792624115944, 0.012803184799849987, -0.027932647615671158, -0.0076736449263989925, 0.01603902503848076, 0.03274304419755936, -0.013360762037336826, 0.09150521457195282, 0.01921369507908821, -0.013365000486373901, 0.028981221839785576, 0.04328111559152603, 0.005482512526214123, 0.01715942844748497, -0.013837876729667187, 0.06653714925050735, -0.024773558601737022, 0.02344578132033348, -0.029198521748185158, -0.06147294491529465, -0.021522384136915207, -0.025527307763695717, -0.04977520927786827, 0.0844593346118927, 0.056941527873277664, 0.06679201871156693, 0.015225042589008808, -0.01128363236784935, 0.02270553447306156, 0.06964384764432907, 0.026676936075091362, -0.05248146504163742, -0.027674159035086632, 0.002598844701424241, 0.005095692817121744, -0.0027368897572159767, -0.047891248017549515, 0.04109680652618408, -0.00821056216955185, 0.016158444806933403, 0.05628462880849838, -0.043633244931697845, 0.037761297076940536, 0.020936453714966774, 0.0007829850655980408, 0.06270337849855423, 0.038746483623981476, 0.030863342806696892, -0.03845648840069771, 0.07812996208667755, 0.012175951153039932, 0.04687779024243355, 0.010342667810618877, -0.09965818375349045, 0.05219428241252899, -0.03436046093702316, 0.017349041998386383, -0.00011120631825178862, 0.04728730767965317, -0.026517922058701515, 0.07252275943756104, -0.0018733064644038677, -0.0007468919502571225, 0.010333850048482418, -0.03524129465222359, 0.05117466673254967, 0.00041889090789482, -0.03684258833527565, -0.08609402179718018, 0.02175017260015011, 0.02974739484488964, -0.013284076005220413, -0.027022719383239746, -0.04528650641441345, -0.052713438868522644, -0.025287456810474396, 0.004377525299787521, -0.04269234836101532, 0.014155752956867218, 0.044733673334121704, 0.003990505822002888, 0.03460191562771797, 0.015483484603464603, 0.0005434624035842717, 0.0526236854493618, -0.02392028085887432, 0.04107077419757843, -0.05365500599145889, 0.022245658561587334, -0.006321287713944912, 0.03576485812664032, -0.03458407521247864, -0.0099563617259264, 0.03043089061975479, -0.01229866873472929, 0.10173600912094116, 0.044640809297561646, -0.009982876479625702, -0.016069957986474037, 0.009124591946601868, 0.06838742643594742, -0.035522133111953735, 0.05543530732393265, -0.0013946437975391746, 0.0927274078130722, 0.034123584628105164, -0.014755425043404102, -0.017285535112023354, -0.04379658028483391, -0.030314555391669273, 0.06716187298297882, 0.00887513067573309, -0.011652286164462566, 0.005866208113729954, 0.01511433906853199, 0.02389564737677574, -0.03916412964463234, -0.02358449064195156, 0.03001827374100685, -0.060621969401836395, -0.07665064185857773, -0.03950328379869461, -0.002718890318647027, -0.014662333764135838, 0.0013778071152046323, 0.02223793789744377, 0.035168033093214035, 0.04346954822540283, 0.0468519926071167, -0.013297604396939278, 0.01414682436734438, 0.02022908255457878, -0.04937032237648964, 0.03167034685611725, 0.040002986788749695, -0.045246321707963943, -0.04559050872921944, -0.010372575372457504, 0.003129618475213647, 0.03954334184527397, 0.0016535307513549924, -0.060084063559770584, -0.02649986930191517, 
-0.018224049359560013, 0.10545213520526886, 0.008776690810918808, -0.03858313709497452, 0.03134262189269066, -0.0395696647465229, -0.01682112365961075, 0.050783682614564896, 0.007121562026441097, 0.0013971769949421287, 0.036619797348976135, -0.025335906073451042, -0.0283689983189106, 0.005473538301885128, 0.10782677680253983, -0.022483186796307564, -0.014317331835627556, -0.03339641913771629, 0.05742904171347618, 0.019783759489655495, 0.047230735421180725, -0.013860595412552357, 0.0033149269875139, 0.03399600088596344, 0.014117563143372536, -0.00810042954981327, -0.0009640059433877468, 0.03456024080514908, 0.01474673580378294, -0.07780274748802185, -0.017924414947628975, 0.029849525541067123, 0.009047921746969223, 0.009600736200809479, 0.06379818916320801, 0.03630800172686577, 0.023115526884794235, -0.0630350336432457, 0.08914418518543243, 0.027336396276950836, 0.0012266739504411817, -0.057352956384420395, -0.0601881705224514, 0.02066897787153721, 0.005252629518508911, -0.01602790504693985, 0.042200129479169846, 0.01646212674677372, -0.04976451396942139, -0.049337781965732574, -0.043403275310993195, -0.0017237275606021285, 0.018216293305158615, 0.011884267441928387, 0.030069757252931595, -0.04934302344918251, 0.04761583358049393, 0.018104569986462593, -0.011641259305179119, -0.02107829414308071, -0.01970154419541359, -0.09320578724145889, 0.004196232184767723, -0.036841537803411484, -0.04561808332800865, -0.04469692334532738, 0.06998735666275024, 0.015792086720466614, 0.024417148903012276, -0.11421520262956619, -0.008014567196369171, -0.0266828965395689, 0.04702435061335564, 0.003292808076366782, -0.025582270696759224, -0.04064546525478363, 0.014371630735695362, -0.03262140229344368, -0.061057932674884796, 0.053506895899772644, 0.038311976939439774, 0.05097329989075661, -0.019703472033143044, -0.06507222354412079, -0.01362341083586216, 0.01779693365097046, 0.020742127671837807, 0.034182559698820114, 0.029347538948059082, -0.012036549858748913, -0.06035514920949936, -0.011091498658061028, 0.041539814323186874, -0.028728418052196503, 0.005448998883366585, 0.027802273631095886, -0.11522385478019714, -0.026985827833414078, 0.0043976036831736565, 0.06785836815834045, 0.0010907434625551105, 0.001246947213076055, -0.0018652870785444975, -0.017310690134763718, -0.016702132299542427, 0.02318611927330494, 0.018930615857243538, -0.0013963589444756508, -0.012603643350303173, -0.01375376246869564, -0.05014721676707268, -0.07737474143505096, 0.0839085727930069, 0.005047287791967392, 0.03685343265533447, 0.03665076941251755, -0.0027055980172008276, 0.06818031519651413, 0.02106393314898014, 0.0012829666957259178, -0.022001679986715317, 0.0150755038484931, 0.03303465619683266, 0.06591128557920456, 0.007813464850187302, -0.023850057274103165, 0.016857007518410683, 0.0063236854039132595, 0.02303488552570343, -0.0401245579123497, -0.0008188458741642535, -0.013317331671714783, 0.006902510300278664, 0.0440266877412796, 0.0389348641037941, 0.013790890574455261, 0.01848253794014454, 0.047146931290626526, 0.04700205475091934, -0.02963370271027088, 0.029775753617286682, -0.005542314145714045, -0.015500043518841267, 0.021673990413546562, 0.0030377220828086138, -0.014151815325021744, 0.0289536714553833, -0.024538973346352577, 0.0005367443081922829, -0.030160577967762947, 0.002442609751597047, -0.0221922155469656, 0.016572823747992516, 0.007224297616630793, 0.05132747069001198, 0.003126216819509864, -0.01649540662765503, -0.0124611621722579, -0.0309552401304245, 0.017682349309325218, 0.027138158679008484, 
0.02613825350999832, -0.04024757817387581, -0.033114124089479446, 0.03375857695937157, -0.07853676378726959, 0.03848689794540405, 0.016865670680999756, -0.04072817787528038, 0.02666451223194599, -0.013163897208869457, -0.026936916634440422, -0.00950457900762558, -0.019679944962263107, -0.036505382508039474, -0.039251092821359634, -0.0038097763899713755, -0.02909981831908226, 0.01689119264483452, -0.005631598643958569, -0.04444366693496704, -0.018400058150291443, 0.09948165714740753, 0.05777476355433464, -0.03311845287680626, -0.031255386769771576, -0.031179746612906456, 0.036125656217336655, -0.0624731220304966, -0.0010934110032394528, -0.015725823119282722, 0.02055438794195652, 0.01676548272371292, -0.030562592670321465, 0.049558937549591064, 0.006549779325723648, -0.05628368258476257, 0.0030370282474905252, 0.08227083832025528, -0.027348440140485764, -0.010132908821105957, -0.06792967766523361, -0.007587062194943428, -0.051301259547472, -0.04431433975696564, 0.021835889667272568, -0.005453919526189566, 0.021799972280859947, 0.03162528946995735, -0.030169980600476265, 0.05377469211816788, -0.009431601502001286, 0.040705256164073944, -0.026424163952469826, 0.03118319623172283, -0.011243980377912521, -0.02357340045273304, 0.009723024442791939, -0.024781029671430588, -0.01325237937271595, 0.0545160286128521, -0.03766128793358803, -0.007153951562941074, -0.043688032776117325, -0.026015624403953552, 0.014584463089704514, 0.03432517126202583, 0.008212804794311523, 0.036940645426511765, -0.01661819778382778, -0.049259286373853683, -0.03470727801322937, -0.0729813426733017, 0.027661921456456184, -0.006366741377860308, -0.0010814400156959891, 0.037873804569244385, 0.001636164146475494, 0.024390170350670815, -0.023241546005010605, -0.004419784527271986, 0.00021935513359494507, -0.0006985748768784106, -0.025970149785280228, 0.002878018422052264, -0.001978357322514057, 0.01109653152525425, -0.058058299124240875, 0.004800621420145035, -0.017448358237743378, 0.045124951750040054, -0.02438781037926674, -0.027957573533058167, 0.06991402059793472, -0.01748705841600895, 0.005101828835904598, -0.009217784740030766, -0.03929601609706879, 0.042745620012283325, 0.015901906415820122, 0.00827154703438282, -0.04116705432534218, 0.011149227619171143, -0.03674894571304321, 0.01657249964773655, 0.011382927186787128, -0.015481950715184212, 0.012986314482986927, 0.05802534148097038, -0.00427026953548193, 0.010893828235566616, -0.017678258940577507, 0.02985813096165657, 0.014119092375040054, 0.02017025649547577, -0.040343813598155975, 0.04583633691072464, 0.014568722806870937, 0.007760098669677973, -0.020953616127371788, -0.002645008033141494, -0.009975298307836056, 0.0109878433868289, 0.08978055417537689, -0.03491469472646713, -0.004549204837530851, -0.050083693116903305, 0.0663733258843422, -0.00014237713185139, 0.017145995050668716, 0.01258561946451664, -0.014209295623004436, -0.001900106086395681, -0.023971112444996834, -0.018014049157500267, 0.008617166429758072, 0.017878243699669838, 0.05544265732169151, 0.02757343277335167, -0.030268710106611252, 0.024967852979898453, -0.035106536000967026, -0.006238855421543121, 0.008238940499722958, -0.044876616448163986, 0.028921762481331825, 0.015378402546048164, -0.026995697990059853, 0.004544367082417011, -0.055179350078105927, 0.04271208122372627, -0.028120635077357292, -0.06017487868666649, -0.04825679957866669, 0.020116129890084267, -0.016937239095568657, 0.009041895158588886, -0.053952544927597046, -0.016601426526904106, -0.007171609904617071, 0.025520171970129013, 
-0.05172353237867355, -0.07343856245279312, -0.0027180544566363096, 0.026363555341959, 0.044201355427503586, 0.03072921559214592, -0.07209662348031998, 0.0015088224317878485, -0.0131612503901124, -0.013388683088123798, 0.007486466318368912, 0.014169180765748024, -0.01832996867597103, -0.02055710181593895, -0.013201910071074963, 0.015327156521379948, -0.013345331884920597, -0.038747526705265045, 5.188623981666751e-05, -0.047390252351760864, 0.009996161796152592, -0.03464282304048538, -0.024708427488803864, -0.01920105330646038, 0.013454530388116837, 0.04075838252902031, 0.027596058323979378, -0.041049111634492874, -0.059456340968608856, 0.014690705575048923, -0.006486057303845882, -0.012127645313739777, 0.02257193997502327, 0.05453644320368767, 0.03415334224700928, 0.018023844808340073, 0.018970809876918793, 0.014337809756398201, -0.05923903360962868, 0.006621537264436483, -0.012769654393196106, -0.01525796391069889, -0.05761175602674484, 0.014176602475345135, -0.03600331023335457, 0.014103407971560955, 0.002629275433719158, -0.042703017592430115, 0.017850475385785103, -0.06273061037063599, -0.015869449824094772, 0.0104240532964468, -0.01796787418425083, -0.05366405099630356, -0.03884487599134445, 0.04246446490287781, -0.04479784518480301, 0.03576481342315674, -0.05046236887574196, 0.06475111097097397, 0.01779775880277157, -0.022702552378177643, -0.024778805673122406, -0.032896753400564194, 0.014413094148039818, 0.014635851606726646, 0.05545400083065033, -0.029285220429301262, 0.009321717545390129, -0.02157060243189335, -0.03143489360809326, 0.017710482701659203, 0.0011548701440915465, -0.005124518182128668, 0.014864994212985039, 0.05080557242035866, 0.019815851002931595, -0.057410337030887604, 0.004832068923860788, 0.009037822484970093, 0.04442164674401283, -0.002744280034676194, -0.0016123239183798432, 0.04296940192580223, 0.06427883356809616, 0.029972447082400322, -0.0057030292227864265, 0.006052183918654919, 0.06463455408811569, 0.027237165719270706, -0.05719420313835144, -0.03605857118964195, -0.02898549847304821, 0.007863245904445648, 0.014614919200539589, -0.02659311704337597, 0.031053777784109116, -0.0026954931672662497, 0.00022154960606712848, -0.015376832336187363, -0.0517142154276371, -0.03851758688688278, -0.006986594758927822, -0.004580304026603699, 0.08692935854196548, -0.014041777700185776, -0.0488242544233799, 0.025057353079319, 0.047696955502033234, -0.004421543795615435, 0.038907237350940704, 0.022293761372566223, -0.013860499486327171, 0.0423780158162117, 0.003362421179190278, 0.03651099279522896, 0.0007563064573332667, -0.014398363418877125, 0.007066812831908464, -0.013017090037465096, -0.008531485684216022, -0.021991243585944176, 0.04669685661792755, -0.012894587591290474, -0.036281686276197433, 0.0839996337890625, 0.030898727476596832, 0.02604648470878601, 0.06207050383090973, 0.02818981558084488, 0.0044098105281591415, 0.010841469280421734, 0.05071385204792023, 0.00784313678741455, 0.011369949206709862, 0.029672332108020782, 0.0008224845514632761, -0.1018763855099678, -0.010897420346736908, 0.020405111834406853, 0.0501691997051239, 0.04700278118252754, 0.0008367817499674857, -0.013964204117655754, 0.0650942325592041, -0.04510926827788353, 0.03625116124749184, 0.0110673438757658, -0.0033491647336632013, 0.01288105733692646, -0.025371048599481583, -0.032318200916051865, 0.018797485157847404, 0.01117926649749279, 0.0009412311483174562, -0.0516565777361393, -0.030875766649842262, -0.0056093051098287106, -0.028411203995347023, 0.0583677813410759, -0.01625736989080906, 
0.029757395386695862, 0.005774027202278376, 0.03949824348092079, 0.00406526168808341, 0.05045682564377785, 0.014958338811993599, 0.057375192642211914, 0.010575571097433567, 0.050990670919418335, -0.0018780494574457407, -0.08699759840965271, -0.03311669081449509, -0.03518478944897652, -0.033709704875946045, -0.035157229751348495, -0.024795537814497948, 0.05064483731985092, -0.01920083723962307, 0.0053268978372216225, 0.04345114529132843, -0.0023100650869309902, 0.020509513095021248, -0.07417090982198715, -0.002616697922348976, -0.036853764206171036, -0.0184538122266531, -0.03070104494690895, 0.007581486366689205, 0.03965935483574867, -0.016653701663017273, -0.02727309800684452, 0.0013350006192922592, 0.020473089069128036, -0.04971387982368469, -0.0531112402677536, 0.031720954924821854, 0.008794956840574741, -0.08100474625825882, -0.0611133798956871, 0.0030386624857783318, 0.07589129358530045, 0.02881506271660328, 0.03126321732997894, -0.0283834058791399, -0.019227558746933937, 0.019624903798103333, 0.027269525453448296, 0.016047395765781403, 0.019875943660736084, 0.04076007753610611, -0.027479643002152443, -0.034122537821531296, 0.04605992138385773, -0.012987939640879631, -0.05422277748584747, 0.024422643706202507, -0.0037328244652599096, -0.05694421008229256, 0.00044419063488021493, -0.005865314044058323, -0.049558524042367935, -0.009033908136188984, -0.015324024483561516, 0.03439310938119888, -0.023355944082140923, -0.00034369336208328605, 0.0718943327665329, -0.017749890685081482, 0.005547091830521822, -0.014735191129148006, -0.00652679055929184, -0.026814408600330353, -0.08084239810705185, 0.043691858649253845, 0.046687304973602295, 0.020473962649703026, 0.026001254096627235, 0.013263455592095852, -0.051931556314229965, 0.025399165228009224, -0.07429972290992737, -0.02504633367061615, 0.026031633839011192, -0.01469772681593895, 0.014889966696500778, 0.0026077907532453537, -0.0033559161238372326, 0.04125768691301346, 0.009770219214260578, 0.0009674556204117835, -0.014875940047204494, -0.06782210618257523, -0.020606884732842445, 0.002791142091155052, -0.04532765969634056, 0.0190111193805933, -0.030455727130174637, 0.03111374005675316, -0.026067044585943222, 0.031605035066604614, -0.0337536558508873, -0.023059749975800514, 0.022104579955339432, -0.034084659069776535, 0.03991895541548729, -0.0655144676566124, 0.023636501282453537, -0.04924291372299194, -0.010195080190896988, -0.049843352288007736, 0.05068054050207138, -0.010111130774021149, -0.009207621216773987, 0.021595565602183342, 0.0040507847443223, 0.003638329217210412, -0.034894753247499466, 0.00010474702139617875, 0.005512204486876726, 0.042723167687654495, -0.014063868671655655, 0.023876750841736794, -0.03542742505669594, 0.017430925741791725, -0.02914072386920452, -0.007632249500602484, 0.04023684561252594, -0.005290816072374582, 0.044632382690906525], index=0, object='embedding')], model='text-multilingual-embedding-002', object='list', usage=Usage(prompt_tokens=11, total_tokens=11), meta={'usage': {'credits_used': 1}}) ``` {% endcode %}
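If you only need the vector itself, it can be read straight from the response object. Below is a minimal sketch, assuming the call above was made with the OpenAI-compatible Python SDK pointed at the AI/ML API base URL; the client setup and the input string here are illustrative placeholders, not part of the example above.

{% code overflow="wrap" %}
```python
from openai import OpenAI

# Assumption: an OpenAI-compatible client pointed at the AI/ML API base URL.
# Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>.
client = OpenAI(
    api_key="<YOUR_AIMLAPI_KEY>",
    base_url="https://api.aimlapi.com/v1",
)

# The input text is a placeholder; any string (or list of strings) works.
response = client.embeddings.create(
    model="text-multilingual-embedding-002",
    input="What is the capital of France?",
)

# One Embedding object is returned per input string; the vector lives in `embedding`.
vector = response.data[0].embedding
print("Dimensions:", len(vector))
print("First components:", vector[:5])
print("Prompt tokens:", response.usage.prompt_tokens)
```
{% endcode %}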
You can find a more advanced example of using embedding vectors in our article [Find Relevant Answers: Semantic Search with Text Embeddings](https://docs.aimlapi.com/use-cases/find-relevant-answers-semantic-search-with-text-embeddings) in the Use Cases section. --- # Source: https://docs.aimlapi.com/api-references/speech-models/text-to-speech.md # Text-to-Speech ## Overview Text-to-speech (TTS) models convert written text into natural-sounding speech, enabling a wide range of applications, from voice assistants and audiobooks to accessibility tools for visually impaired users. These models use deep learning techniques, such as neural vocoders and transformer-based architectures, to generate human-like speech with variations in tone, pitch, and emphasis. Many modern TTS systems support multiple languages, voices, and even emotional expressions for more engaging and realistic audio output. Advanced TTS models offer features like speaker adaptation, voice cloning, and fine-tuned prosody control, allowing for highly customizable speech synthesis. Some solutions run on-device for real-time applications, while cloud-based TTS services provide scalable, high-quality synthesis for larger workloads. Developers can integrate TTS into their applications through APIs, enabling dynamic voice generation for customer support, content creation, and assistive technologies. ## All Available Text-to-Speech Models
| Model ID | Developer | Context | Model Card |
| --- | --- | --- | --- |
| `alibaba/qwen3-tts-flash` | Alibaba Cloud | | Qwen3-TTS-Flash |
| `#g1_aura-angus-en` | Deepgram | | Aura |
| `#g1_aura-arcas-en` | Deepgram | | Aura |
| `#g1_aura-asteria-en` | Deepgram | | Aura |
| `#g1_aura-athena-en` | Deepgram | | Aura |
| `#g1_aura-helios-en` | Deepgram | | Aura |
| `#g1_aura-hera-en` | Deepgram | | Aura |
| `#g1_aura-luna-en` | Deepgram | | Aura |
| `#g1_aura-orion-en` | Deepgram | | Aura |
| `#g1_aura-orpheus-en` | Deepgram | | Aura |
| `#g1_aura-perseus-en` | Deepgram | | Aura |
| `#g1_aura-stella-en` | Deepgram | | Aura |
| `#g1_aura-zeus-en` | Deepgram | | Aura |
| `#g1_aura-2-amalthea-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-andromeda-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-apollo-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-arcas-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-aries-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-asteria-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-athena-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-atlas-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-aurora-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-callista-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-cora-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-cordelia-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-delia-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-draco-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-electra-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-harmonia-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-helena-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-hera-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-hermes-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-hyperion-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-iris-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-janus-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-juno-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-jupiter-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-luna-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-mars-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-minerva-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-neptune-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-odysseus-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-ophelia-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-orion-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-orpheus-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-pandora-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-phoebe-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-pluto-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-saturn-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-selene-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-thalia-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-theia-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-vesta-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-zeus-en` | Deepgram | | Aura 2 |
| `#g1_aura-2-celeste-es` | Deepgram | | Aura 2 |
| `#g1_aura-2-estrella-es` | Deepgram | | Aura 2 |
| `#g1_aura-2-nestor-es` | Deepgram | | Aura 2 |
| `elevenlabs/eleven_multilingual_v2` | ElevenLabs | | ElevenLabs Multilingual v2 |
| `elevenlabs/eleven_turbo_v2_5` | ElevenLabs | | ElevenLabs Turbo v2.5 |
| `hume/octave-2` | Hume AI | | Octave 2 |
| `inworld/tts-1` | Inworld | | Inworld TTS-1 |
| `inworld/tts-1-max` | Inworld | | Inworld TTS-1-Max |
| `microsoft/vibevoice-1.5b` | Microsoft | | VibeVoice 1.5B |
| `microsoft/vibevoice-7b` | Microsoft | | VibeVoice 7B |
| `openai/tts-1` | OpenAI | | TTS-1 |
| `openai/tts-1-hd` | OpenAI | | TTS-1 HD |
| `openai/gpt-4o-mini-tts` | OpenAI | | GPT-4o-mini-TTS |
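For a quick start, here is a minimal sketch of calling one of the models listed above through the `/v1/tts` endpoint. The request shape and the `audio.url` field in the response follow the individual model pages below; the chosen model and the sample text are placeholders.

{% code overflow="wrap" %}
```python
import requests

# Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>.
api_key = "<YOUR_AIMLAPI_KEY>"

response = requests.post(
    "https://api.aimlapi.com/v1/tts",
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
    json={
        "model": "openai/tts-1",              # any model ID from the table above
        "text": "Hello from the AI/ML API!",  # placeholder text to synthesize
    },
)
response.raise_for_status()

# The response contains a URL pointing to the generated audio file.
print("Audio URL:", response.json()["audio"]["url"])
```
{% endcode %}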
--- # Source: https://docs.aimlapi.com/api-references/video-models/magic/text-to-video.md # magic/text-to-video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `magic/text-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} The model allows you to embed your custom text into the selected video template — sound included.
Supported Templates

"template": "Shanghai Drone Show"

*** ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schemas Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find two corresponding API schemas and an example with both endpoint calls. ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["magic/text-to-video"]},"prompt":{"type":"string","description":"Text that will appear in the video."},"template":{"type":"string","enum":["Shanghai Drone Show"],"default":"Shanghai Drone Show","description":"Video design template."}},"required":["model","prompt"],"title":"magic/text-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Code Example The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "magic/text-to-video", "prompt": "AI/ML API", "template": "Shanghai Drone Show" } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: x6cwiGJM4VSX8msX6VtRX Status: queued Still waiting... Checking again in 15 seconds. Status: generating Still waiting... Checking again in 15 seconds. Status: generating Still waiting... Checking again in 15 seconds. Status: generating Still waiting... Checking again in 15 seconds. Status: generating Still waiting... Checking again in 15 seconds. Status: generating Still waiting... Checking again in 15 seconds. Status: generating Still waiting... Checking again in 15 seconds. Status: generating Still waiting... Checking again in 15 seconds. Status: generating Still waiting... Checking again in 15 seconds. Status: completed Processing complete:\n {'id': 'x6cwiGJM4VSX8msX6VtRX', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/mule/ompr/openmagic/render_tasks/254262/1657f92e15fd427298b5e60e8eedcbce.mp4?response-content-disposition=attachment%3B%20filename%3D1657f92e15fd427298b5e60e8eedcbce.mp4&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=FUQDW4Z92RG9JPURIVP1%2F20251230%2Ffsn1%2Fs3%2Faws4_request&X-Amz-Date=20251230T132555Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=c5322a3ec963563674e1ea3447e978bc9d9325a854e30bc2ad8ab2563c2db9df'}} ``` {% endcode %}
**Processing time**: \~ 2 min 22 sec. **Generated video** (608x1080, without sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/capabilities/thinking-reasoning.md # Thinking / Reasoning ## Overview Some text models support advanced reasoning mode, enabling them to perform multi-step problem solving, draw inferences, and follow complex instructions. This makes them well-suited for tasks like code generation, data analysis, and answering questions that require understanding context or logic. {% hint style="warning" %} Sometimes, if you give the model a serious and complex task, generating a response can take quite a while. In such cases, you might want to use streaming mode to receive the answer word by word as it is being generated. {% endhint %} ## Models That Support Thinking / Reasoning Mode ### Anthropic Special parameters, such as `thinking` in Claude models, provide transparency into the model’s step-by-step reasoning process before it gives its final answer. Supported models: * [anthropic/claude-3.7-sonnet](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-3.7-sonnet) * [anthropic/claude-opus-4](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4-opus) * [anthropic/claude-sonnet-4](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4-sonnet) * [anthropic/claude-opus-4.1](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-opus-4.1) * [anthropic/claude-sonnet-4.5](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4-5-sonnet) * [anthropic/claude-opus-4-5](https://docs.aimlapi.com/api-references/text-models-llm/anthropic/claude-4.5-opus) ### Google Google's policy regarding reasoning models is not to provide parameters for explicitly controlling the model's reasoning activity during invocation. However, this activity does occur, and you can even inspect how many tokens it consumed by checking the `reasoning_tokens` field in the response.
Example of the "usage" section in a Gemini model response ```json "usage": { "prompt_tokens": 6, "completion_tokens": 3050, "completion_tokens_details": { "reasoning_tokens": 1097 }, "total_tokens": 3056 ```
Supported models: * [google/gemini-2.5-flash-lite-preview](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.5-flash-lite-preview) * [google/gemini-2.5-flash](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.5-flash) * [google/gemini-2.5-pro](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-2.5-pro) * [google/gemini-3-pro-preview](https://docs.aimlapi.com/api-references/text-models-llm/google/gemini-3-pro-preview) ### OpenAI and other vendors The standard way to control reasoning behavior in OpenAI models—and in some models from other providers—is through the `reasoning_effort` parameter, which tells the model how much internal reasoning it should perform before responding to the prompt. Accepted values are `low`, `medium`, and `high`. Lower levels prioritize speed and efficiency, while higher levels provide deeper reasoning at the cost of increased token usage and latency. The default is `medium`, offering a balance between performance and quality. Supported models: * [o1](https://docs.aimlapi.com/api-references/text-models-llm/openai/o1) * [o3-mini](https://docs.aimlapi.com/api-references/text-models-llm/openai/o3-mini) * [openai/gpt-4.1-mini-2025-04-14](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4.1-mini) * [openai/gpt-4.1-nano-2025-04-14](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4.1-nano) * [openai/o3-2025-04-16](https://docs.aimlapi.com/api-references/text-models-llm/openai/o3) * [openai/o4-mini-2025-04-16](https://docs.aimlapi.com/api-references/text-models-llm/openai/o4-mini) * [openai/gpt-oss-20b](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-oss-20b) * [openai/gpt-oss-120b](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-oss-120b) * [openai/gpt-5-2025-08-07](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5) * [openai/gpt-5-mini-2025-08-07](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-mini) * [openai/gpt-5-nano-2025-08-07](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-nano) * [openai/gpt-5-1](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5-1) * [openai/gpt-5-2](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-5.2) *** * [alibaba/qwen3-32b](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-32b) * [alibaba/qwen3-coder-480b-a35b-instruct](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-coder-480b-a35b-instruct) * [alibaba/qwen3-235b-a22b-thinking-2507](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-235b-a22b-thinking-2507) * [alibaba/qwen3-next-80b-a3b-thinking](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-next-80b-a3b-thinking) * [alibaba/qwen3-vl-32b-thinking](https://docs.aimlapi.com/api-references/text-models-llm/alibaba-cloud/qwen3-vl-32b-thinking) *** * [baidu/ernie-4.5-21b-a3b-thinking](https://docs.aimlapi.com/api-references/text-models-llm/baidu/ernie-4.5-21b-a3b-thinking) * [baidu/ernie-5-0-thinking-preview](https://docs.aimlapi.com/api-references/text-models-llm/baidu/ernie-5.0-thinking-preview) * [baidu/ernie-5-0-thinking-latest](https://docs.aimlapi.com/api-references/text-models-llm/baidu/ernie-5.0-thinking-latest) *** * [deepseek/deepseek-v3.2-speciale](https://docs.aimlapi.com/api-references/text-models-llm/deepseek/deepseek-v3.2-speciale) *** * 
[minimax/m2](https://docs.aimlapi.com/api-references/text-models-llm/minimax/m2) * [minimax/m2-1](https://docs.aimlapi.com/api-references/text-models-llm/minimax/m2-1) *** * [nvidia/nemotron-nano-9b-v2](https://docs.aimlapi.com/api-references/text-models-llm/nvidia/nemotron-nano-9b-v2) * [nvidia/nemotron-nano-12b-v2-vl](https://docs.aimlapi.com/api-references/text-models-llm/nvidia/llama-3.1-nemotron-70b-1) *** * [x-ai/grok-3-mini-beta](https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-3-mini-beta) * [x-ai/grok-4-07-09](https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-4) * [x-ai/grok-code-fast-1](https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-code-fast-1) * [x-ai/grok-4-fast-reasoning](https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-4-fast-reasoning) * [x-ai/grok-4-1-fast-reasoning](https://docs.aimlapi.com/api-references/text-models-llm/xai/grok-4-1-fast-reasoning) *** * [zhipu/glm-4.5-air](https://docs.aimlapi.com/api-references/text-models-llm/zhipu/glm-4.5-air) * [zhipu/glm-4.5](https://docs.aimlapi.com/api-references/text-models-llm/zhipu/glm-4.5) * [zhipu/glm-4.7](https://docs.aimlapi.com/api-references/text-models-llm/zhipu/glm-4.7) --- # Source: https://docs.aimlapi.com/api-references/embedding-models/together-ai.md # Together AI - [m2-bert-80M-retrieval](/api-references/embedding-models/together-ai/m2-bert-80m-retrieval.md) --- # Source: https://docs.aimlapi.com/integrations/toolhouse.md # Toolhouse ## Overview [**Toolhouse**](https://app.toolhouse.ai/) is a Backend-as-a-Service (BaaS) to build, run, and manage AI agents. Toolhouse simplifies the process of building agents in a local environment and running them in production. With Toolhouse, you define agents as code and deploy them as APIs using a single command. Toolhouse agents are automatically connected to the Toolhouse MCP Server; it gives agents access to RAG, memory, code execution, browser use, and all the functionality agents need to perform actions. You can add MCP Servers and even define custom code that the agent can use to perform actions not covered by public MCP Servers. Toolhouse has built-in eval, prompt optimization, and agentic orchestration. For further information about the framework, please check the official documentation: * [Toolhouse Docs – Quick start (Python)](https://docs.toolhouse.ai/toolhouse/toolhouse-sdk/quick-start) * [Toolhouse SDK on GitHub (Python)](https://github.com/toolhouseai/toolhouse-sdk-python) *** ## Integration via Python ### Installation ```bash pip install toolhouse openai python-dotenv ``` > Optionally add: `pip install groq` or other SDKs, depending on the target LLM platform. *** ### Connection Setup {% hint style="warning" %} You should obtain our [**API key**](https://aimlapi.com/app/keys) first. {% endhint %} 1. Create a `.env` file in your project: ```bash TOOLHOUSE_API_KEY= AIMLAPI_KEY= ``` 2. Example Python integration (`toolhouse_example.py`): {% code overflow="wrap" %} ```python import os from dotenv import load_dotenv from toolhouse import Toolhouse from openai import OpenAI load_dotenv() th = Toolhouse(api_key=os.getenv("TOOLHOUSE_API_KEY")) client = OpenAI( api_key=os.getenv("AIMLAPI_KEY"), base_url="https://api.aimlapi.com/v1", ) MODEL = "mistralai/Mistral-7B-Instruct-v0.2" messages = [ { "role": "user", "content": "List 3 innovative uses of AI in healthcare." 
} ] response = client.chat.completions.create( model=MODEL, messages=messages, tools=th.get_tools() ) tool_run = th.run_tools(response) messages.extend(tool_run) response = client.chat.completions.create( model=MODEL, messages=messages, tools=th.get_tools() ) print(response.choices[0].message.content) ``` {% endcode %} *** ### GUI Integration The Toolhouse GUI () supports: * API key management * Tool selection via Bundles * Agent execution & history * Monitoring tool calls in logs Tool configuration is managed entirely through their GUI and reflected in tool discovery (`th.get_tools()`). *** ## Integration via TypeScript ### Installation Install the required dependencies: ```bash npm install @toolhouseai/sdk openai dotenv ``` *** ### Connection Setup 1. Create a `.env` file in the project root: ```env TOOLHOUSE_API_KEY= AIMLAPI_KEY= ``` 2. Create a TypeScript file (`toolhouse.ts`) with the following content: ```ts import 'dotenv/config'; import { Toolhouse } from '@toolhouseai/sdk'; import OpenAI from 'openai'; const MODEL = 'mistralai/Mistral-7B-Instruct-v0.2'; async function main() { const toolhouse = new Toolhouse({ apiKey: process.env.TOOLHOUSE_API_KEY, metadata: { id: "aimlapi-integration", timezone: "0" } }); const client = new OpenAI({ baseURL: "https://api.aimlapi.com/v1", apiKey: process.env.AIMLAPI_KEY }); const messages = [{ role: "user", content: "List three use cases of AI in retail." }]; const tools = await toolhouse.getTools(); const chatCompletion = await client.chat.completions.create({ model: MODEL, messages, tools }); const toolResponses = await toolhouse.runTools(chatCompletion); const finalMessages = [...messages, ...toolResponses]; const finalResponse = await client.chat.completions.create({ model: MODEL, messages: finalMessages, tools }); console.log(JSON.stringify(finalResponse, null, 2)); } main(); ``` 3. Run the script: ```bash npx ts-node toolhouse.ts ``` *** ### GUI Integration Toolhouse provides a browser-based GUI at [app.toolhouse.ai](https://app.toolhouse.ai/) where you can: * Manage API keys * Add and organize tools via Bundles * Monitor execution logs * Trigger and test agents visually > ✅ Toolhouse integration with AIMLAPI is fully supported via `baseURL` override in the OpenAI-compatible SDK. *** ## ✅ Supported AIMLAPI Models All chat-compatible models served by AIMLAPI are supported, including: * **Mistralai** – `Mistral-7B-Instruct`, `Mixtral-8x7B` * **Meta** – `Meta-LLaMA-3.1`, `LLaMA-3.3` * **Anthropic** – `Claude-3.5-Haiku` * **NVIDIA** – `Nemotron-70B` * **Google, xAI, Alibaba, Cohere, DeepSeek** – all supported through the unified `https://api.aimlapi.com/v1` endpoint. 📘 View [our full text (chat) model catalog](https://docs.aimlapi.com/api-references/text-models-llm#complete-text-model-list). *** ## ⚙️ Supported Parameters No AIMLAPI-specific parameter differences were found. Use standard OpenAI-compatible parameters: * `model` * `messages` * `temperature` * `max_tokens` * `stream` * `tools` (Toolhouse integration) *** ## 🧠 Supported Call Features
| Feature | Via Python | Via TypeScript |
| --- | --- | --- |
| Synchronous calls | | |
| Asynchronous use | 🟡 (manual) | ✅ (via Promises) |
| Tool Calling | | |
| Streaming | | |
| Threads | | |
| Local tools | | ✅ via `registerLocalTool()` |
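For reference, here is a minimal sketch of a plain chat call (without tools) using the standard OpenAI-compatible parameters listed above; the model, prompt, and parameter values are illustrative placeholders, reusing the same client setup as the Python example earlier on this page.

{% code overflow="wrap" %}
```python
import os
from dotenv import load_dotenv
from openai import OpenAI

load_dotenv()

# Same client setup as in the Toolhouse Python example above.
client = OpenAI(
    api_key=os.getenv("AIMLAPI_KEY"),
    base_url="https://api.aimlapi.com/v1",
)

# Standard OpenAI-compatible parameters; values here are illustrative only.
response = client.chat.completions.create(
    model="mistralai/Mistral-7B-Instruct-v0.2",
    messages=[
        {"role": "user", "content": "Summarize the benefits of AI agents in one sentence."}
    ],
    temperature=0.7,
    max_tokens=256,
)

print(response.choices[0].message.content)
```
{% endcode %}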
--- # Source: https://docs.aimlapi.com/api-references/image-models/topaz-labs.md # Topaz Labs - [Sharpen](/api-references/image-models/topaz-labs/sharpen.md) - [Sharpen Generative](/api-references/image-models/topaz-labs/sharpen-generative.md) --- # Source: https://docs.aimlapi.com/api-references/3d-generating-models/stability-ai/triposr.md # triposr {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `triposr` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A transformer-based model designed for rapid 3D object reconstruction from a single RGB image, capable of generating high-quality 3D meshes in under 0.5 seconds on an NVIDIA A100 GPU. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["triposr"]},"image_url":{"type":"string","format":"uri","description":"The URL of the reference image."},"output_format":{"type":"string","enum":["glb","obj"],"default":"glb","description":"The format of the generated image."},"do_remove_background":{"type":"boolean","description":"Enables removing the background from the input image."},"foreground_ratio":{"type":"number","minimum":0.5,"maximum":1,"default":0.9,"description":"Ratio of the foreground image to the original image."},"mc_resolution":{"type":"integer","minimum":32,"maximum":1024,"default":256,"description":"Resolution of the marching cubes. 
Above 512 is not recommended."}},"required":["model","image_url"],"title":"triposr"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Example {% code overflow="wrap" %} ```python import requests def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "triposr", "image_url": "https://upload.wikimedia.org/wikipedia/commons/thumb/2/22/Fly_Agaric_mushroom_05.jpg/576px-Fly_Agaric_mushroom_05.jpg", }, ) response.raise_for_status() data = response.json() url = data["model_mesh"]["url"] file_name = data["model_mesh"]["file_name"] mesh_response = requests.get(url, stream=True) with open(file_name, "wb") as file: for chunk in mesh_response.iter_content(chunk_size=8192): file.write(chunk) if __name__ == "__main__": main() ``` {% endcode %} **Response**: The example returns a textured 3D mesh in GLB file format. You can view it [here](https://drive.google.com/file/d/1pfA6PGgDY31rEGcoea7qoZW6uhhPYSE6/view?usp=sharing). For clarity, we took several screenshots of our mushroom from different angles in an online GLB viewer. As you can see, the model understands the shape, but preserving the pattern on the back side (which was not visible on the reference image) could be improved:
Compare them with the [reference image](https://upload.wikimedia.org/wikipedia/commons/thumb/2/22/Fly_Agaric_mushroom_05.jpg/576px-Fly_Agaric_mushroom_05.jpg):
{% hint style="info" %} Try to choose reference images where the target object is not obstructed by other objects and does not blend into the background. Depending on the complexity of the object, you may need to experiment with the resolution of the reference image to achieve a satisfactory mesh. {% endhint %} --- # Source: https://docs.aimlapi.com/api-references/speech-models/text-to-speech/openai/tts-1-hd.md # TTS-1 HD {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `openai/tts-1-hd` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} This model is designed for high quality text-to-speech generation. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/tts > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.TextToSpeechResponse":{"type":"object","properties":{"metadata":{"type":"object","properties":{"transaction_key":{"type":"string"},"request_id":{"type":"string"},"sha256":{"type":"string"},"created":{"type":"string","format":"date-time"},"duration":{"type":"number"},"channels":{"type":"number"},"models":{"type":"array","items":{"type":"string"}},"model_info":{"type":"object","additionalProperties":{"type":"object","properties":{"name":{"type":"string"},"version":{"type":"string"},"arch":{"type":"string"}},"required":["name","version","arch"]}}},"required":["transaction_key","request_id","sha256","created","duration","channels","models","model_info"]}},"required":["metadata"]}}},"paths":{"/v1/tts":{"post":{"operationId":"VoiceModelsController_textToSpeech_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["openai/tts-1-hd"]},"text":{"type":"string","minLength":1,"maxLength":4096,"description":"The text content to be converted to speech."},"voice":{"type":"string","enum":["alloy","ash","ballad","coral","echo","fable","nova","onyx","sage","shimmer","verse"],"default":"alloy","description":"Name of the voice to be used."},"style":{"type":"string","description":"Determines the style exaggeration of the voice. This setting attempts to amplify the style of the original speaker. It does consume additional computational resources and might increase latency if set to anything other than 0."},"response_format":{"type":"string","enum":["mp3","opus","aac","flac","wav","pcm"],"default":"mp3","description":"Format of the output content for non-streaming requests. Controls how the generated audio data is encoded in the response."},"speed":{"type":"number","minimum":0.25,"maximum":4,"default":1,"description":"Adjusts the speed of the voice. 
A value of 1.0 is the default speed, while values less than 1.0 slow down the speech, and values greater than 1.0 speed it up."}},"required":["model","text"]}}}},"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.TextToSpeechResponse"}}}}},"tags":["Voice Models"]}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests # Insert your AI/ML API key instead of : api_key = "" base_url = "https://api.aimlapi.com/v1" headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json", } data = { "model": "openai/tts-1-hd", "text": "TTS-1 is a fast and powerful language model. Use it to convert text to natural sounding spoken text.", "voice": "coral", } response = requests.post(f"{base_url}/tts", headers=headers, json=data) response.raise_for_status() result = response.json() print("Audio URL:", result["audio"]["url"]) ``` {% endcode %} {% endtab %} {% tab title="JaveScript" %} {% code overflow="wrap" %} ```javascript import axios from "axios"; // Insert your AI/ML API key instead of : const apiKey = ""; const baseURL = "https://api.aimlapi.com/v1"; const headers = { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json", }; const data = { model: "openai/tts-1-hd", text: "TTS-1 is a fast and powerful language model. Use it to convert text to natural sounding spoken text.", voice: "coral", }; const main = async () => { const response = await axios.post(`${baseURL}/tts`, data, { headers }); console.log("Audio URL:", response.data.audio.url); }; main().catch(console.error); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ``` Audio URL: https://cdn.aimlapi.com/generations/hedgehog/1760948051400-99effd93-a38a-43d5-b4e4-76b42afb6e67.mp3 ``` {% endcode %}
Listen to the audio sample we generated: {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/speech-models/text-to-speech/inworld/tts-1-max.md # inworld/tts-1-max {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `inworld/tts-1-max` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} This model is designed for realtime text-to-speech generation. A larger, more expressive variant of [inworld/tts-1](https://docs.aimlapi.com/api-references/speech-models/text-to-speech/inworld/tts-1). ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/tts > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.TextToSpeechResponse":{"type":"object","properties":{"metadata":{"type":"object","properties":{"transaction_key":{"type":"string"},"request_id":{"type":"string"},"sha256":{"type":"string"},"created":{"type":"string","format":"date-time"},"duration":{"type":"number"},"channels":{"type":"number"},"models":{"type":"array","items":{"type":"string"}},"model_info":{"type":"object","additionalProperties":{"type":"object","properties":{"name":{"type":"string"},"version":{"type":"string"},"arch":{"type":"string"}},"required":["name","version","arch"]}}},"required":["transaction_key","request_id","sha256","created","duration","channels","models","model_info"]}},"required":["metadata"]}}},"paths":{"/v1/tts":{"post":{"operationId":"VoiceModelsController_textToSpeech_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["inworld/tts-1-max"]},"text":{"type":"string","minLength":1,"maxLength":500000,"description":"The text content to be converted to speech."},"voice":{"type":"string","enum":["Alex","Ashley","Craig","Deborah","Dennis","Dominus","Edward","Elizabeth","Hades","Heitor","Julia","Maitê","Mark","Olivia","Pixie","Priya","Ronald","Sarah","Shaun","Theodore","Timothy","Wendy"],"default":"Alex","description":"Name of the voice to be used."},"format":{"type":"string","enum":["wav","mp3"],"default":"mp3","description":"Audio output format. WAV delivers uncompressed audio in a widely supported container format, while MP3 provides good compression and compatibility."}},"required":["model","text"]}}}},"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.TextToSpeechResponse"}}}}},"tags":["Voice Models"]}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests # Insert your AI/ML API key instead of : api_key = "" base_url = "https://api.aimlapi.com/v1" headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json", } data = { "model": "inworld/tts-1-max", "text": "It is a fast and powerful language model. 
Use it to convert text to natural sounding spoken text.", "voice": "Timothy", } response = requests.post(f"{base_url}/tts", headers=headers, json=data) response.raise_for_status() result = response.json() print("Audio URL:", result["audio"]["url"]) ``` {% endcode %} {% endtab %} {% tab title="JaveScript" %} {% code overflow="wrap" %} ```javascript import axios from "axios"; // Insert your AI/ML API key instead of : const apiKey = ""; const baseURL = "https://api.aimlapi.com/v1"; const headers = { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json", }; const data = { model: "inworld/tts-1-max", text: "It is a fast and powerful language model. Use it to convert text to natural sounding spoken text.", voice: "Timothy", }; const main = async () => { const response = await axios.post(`${baseURL}/tts`, data, { headers }); console.log("Audio URL:", response.data.audio.url); }; main().catch(console.error); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ``` Audio URL: https://cdn.aimlapi.com/generations/tts/inworld-tts-fc718c97-12b3-42dc-919c-518c48ace59a.mp3/1764327592881-89e9ea63-935c-42d0-b769-8290ad769b7c.mp3 ``` {% endcode %}
Listen to the audio sample we generated (\~ 3.2 s): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/speech-models/text-to-speech/openai/tts-1.md # Source: https://docs.aimlapi.com/api-references/speech-models/text-to-speech/inworld/tts-1.md # inworld/tts-1 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `inworld/tts-1` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} This model is designed for realtime text-to-speech generation. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/tts > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.TextToSpeechResponse":{"type":"object","properties":{"metadata":{"type":"object","properties":{"transaction_key":{"type":"string"},"request_id":{"type":"string"},"sha256":{"type":"string"},"created":{"type":"string","format":"date-time"},"duration":{"type":"number"},"channels":{"type":"number"},"models":{"type":"array","items":{"type":"string"}},"model_info":{"type":"object","additionalProperties":{"type":"object","properties":{"name":{"type":"string"},"version":{"type":"string"},"arch":{"type":"string"}},"required":["name","version","arch"]}}},"required":["transaction_key","request_id","sha256","created","duration","channels","models","model_info"]}},"required":["metadata"]}}},"paths":{"/v1/tts":{"post":{"operationId":"VoiceModelsController_textToSpeech_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["inworld/tts-1","openai/tts-1"]},"text":{"type":"string","minLength":1,"maxLength":500000,"description":"The text content to be converted to speech."},"voice":{"type":"string","enum":["Alex","Ashley","Craig","Deborah","Dennis","Dominus","Edward","Elizabeth","Hades","Heitor","Julia","Maitê","Mark","Olivia","Pixie","Priya","Ronald","Sarah","Shaun","Theodore","Timothy","Wendy"],"default":"Alex","description":"Name of the voice to be used."},"format":{"type":"string","enum":["wav","mp3"],"default":"mp3","description":"Audio output format. WAV delivers uncompressed audio in a widely supported container format, while MP3 provides good compression and compatibility."}},"required":["model","text"]}}}},"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.TextToSpeechResponse"}}}}},"tags":["Voice Models"]}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests # Insert your AI/ML API key instead of : api_key = "" base_url = "https://api.aimlapi.com/v1" headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json", } data = { "model": "inworld/tts-1", "text": "It is a fast and powerful language model. 
Use it to convert text to natural sounding spoken text.", "voice": "Deborah", } response = requests.post(f"{base_url}/tts", headers=headers, json=data) response.raise_for_status() result = response.json() print("Audio URL:", result["audio"]["url"]) ``` {% endcode %} {% endtab %} {% tab title="JaveScript" %} {% code overflow="wrap" %} ```javascript import axios from "axios"; // Insert your AI/ML API key instead of : const apiKey = ""; const baseURL = "https://api.aimlapi.com/v1"; const headers = { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json", }; const data = { model: "inworld/tts-1", text: "It is a fast and powerful language model. Use it to convert text to natural sounding spoken text.", voice: "Deborah", }; const main = async () => { const response = await axios.post(`${baseURL}/tts`, data, { headers }); console.log("Audio URL:", response.data.audio.url); }; main().catch(console.error); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ``` Audio URL: https://cdn.aimlapi.com/generations/tts/inworld-tts-00f0415f-c7c5-4ee3-aa2f-7326a738bc87.mp3/1764327112283-f4bc65b8-415c-4388-9094-a5c80e7f7643.mp3 ``` {% endcode %}
Listen to the audio sample we generated (\~ 3 s): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/speech-models/speech-to-text/assembly-ai/universal.md # universal {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `aai/universal` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} A new Speech-to-Text model offering exceptional accuracy by leveraging its deep understanding of context and semantics, with the broadest language support. {% hint style="success" %} This model use per-second billing. The cost of audio transcription is based on the number of seconds in the input audio file, not the processing time. {% endhint %} ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema #### Creating and sending a speech-to-text conversion task to the server ## POST /v1/stt/create > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.SpeechToTextCreateResponseDTO":{"type":"object","properties":{"generation_id":{"type":"string","format":"uuid"}},"required":["generation_id"]}}},"paths":{"/v1/stt/create":{"post":{"operationId":"VoiceModelsController_createSpeechToText_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["aai/universal"]},"audio":{"type":"object","properties":{"buffer":{"nullable":true},"mimetype":{"type":"string"},"size":{"type":"integer"},"originalname":{"type":"string"},"encoding":{"type":"string"},"fieldname":{"type":"string"}},"required":["mimetype","originalname","encoding","fieldname"],"description":"The audio file to transcribe."},"audio_start_from":{"type":"integer","description":"The point in time, in milliseconds, in the file at which the transcription was started."},"audio_end_at":{"type":"integer","description":"The point in time, in milliseconds, in the file at which the transcription was terminated."},"language_code":{"type":"string","description":"The language of your audio file. Possible values are found in Supported Languages. The default value is 'en_us'."},"language_confidence_threshold":{"type":"number","minimum":0,"maximum":1,"description":"The confidence threshold for the automatically detected language. An error will be returned if the language confidence is below this threshold. Defaults to 0."},"language_detection":{"type":"boolean","description":"Enable Automatic language detection, either true or false. 
Available for universal model only."},"punctuate":{"type":"boolean","nullable":true,"default":null,"description":"Adds punctuation and capitalization to the transcript"},"format_text":{"type":"boolean","default":true,"description":"Enable Text Formatting, can be true or false."},"disfluencies":{"type":"boolean","default":false,"description":"Transcribe Filler Words, like \"umm\", in your media file; can be true or false."},"multichannel":{"type":"boolean","default":false,"description":"Enable Multichannel transcription, can be true or false."},"speaker_labels":{"type":"boolean","nullable":true,"default":null,"description":"Enable Speaker diarization, can be true or false."},"speakers_expected":{"type":"integer","nullable":true,"default":null,"description":"Tell the speaker label model how many speakers it should attempt to identify. See Speaker diarization for more details."},"content_safety":{"type":"boolean","default":false,"description":"Enable Content Moderation, can be true or false."},"iab_categories":{"type":"boolean","default":false,"description":"Enable Topic Detection, can be true or false."},"custom_spelling":{"type":"array","items":{"type":"object","properties":{"from":{"type":"string"},"to":{"type":"string"}},"required":["from","to"]},"description":"Customize how words are spelled and formatted using to and from values."},"auto_highlights":{"type":"boolean","default":false,"description":"Enable Key Phrases, either true or false."},"word_boost":{"type":"array","items":{"type":"string"},"description":"The list of custom vocabulary to boost transcription probability for."},"boost_param":{"type":"string","enum":["low","default","high"],"description":"How much to boost specified words. Allowed values: low, default, high."},"filter_profanity":{"type":"boolean","default":false,"description":"Filter profanity from the transcribed text, can be true or false."},"redact_pii":{"type":"boolean","default":false,"description":"Redact PII from the transcribed text using the Redact PII model, can be true or false."},"redact_pii_audio":{"type":"boolean","default":false,"description":"Generate a copy of the original media file with spoken PII \"beeped\" out, can be true or false. See PII redaction for more details."},"redact_pii_audio_quality":{"type":"string","enum":["mp3","wav"],"description":"Controls the filetype of the audio created by redact_pii_audio. Currently supports mp3 (default) and wav. See PII redaction for more details."},"redact_pii_policies":{"type":"array","items":{"type":"string","enum":["account_number","banking_information","blood_type","credit_card_cvv","credit_card_expiration","credit_card_number","date","date_interval","date_of_birth","drivers_license","drug","duration","email_address","event","filename","gender_sexuality","healthcare_number","injury","ip_address","language","location","marital_status","medical_condition","medical_process","money_amount","nationality","number_sequence","occupation","organization","passport_number","password","person_age","person_name","phone_number","physical_attribute","political_affiliation","religion","statistics","time","url","us_social_security_number","username","vehicle_id","zodiac_sign"]},"description":"The list of PII Redaction policies to enable. See PII redaction for more details."},"redact_pii_sub":{"type":"string","enum":["entity_name","hash"],"description":"The replacement logic for detected PII, can be `entity_type` or `hash`. 
See PII redaction for more details."},"sentiment_analysis":{"type":"boolean","default":false,"description":"Enable Sentiment Analysis, can be true or false."},"entity_detection":{"type":"boolean","default":false,"description":"Enable Entity Detection, can be true or false."},"summarization":{"type":"boolean","default":false,"description":"Enable Summarization, can be true or false."},"summary_model":{"type":"string","enum":["informative","conversational","catchy"],"description":"The model to summarize the transcript. Allowed values: informative, conversational, catchy."},"summary_type":{"type":"string","enum":["bullets","bullets_verbose","gist","headline","paragraph"],"description":"The type of summary. Allowed values: bullets, bullets_verbose, gist, headline, paragraph."},"auto_chapters":{"type":"boolean","default":false,"description":"Enable Auto Chapters, either true or false."},"speech_threshold":{"type":"number","minimum":0,"maximum":1,"description":"Reject audio files that contain less than this fraction of speech. Valid values are in the range [0, 1] inclusive."}},"required":["model","audio"]}}}},"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.SpeechToTextCreateResponseDTO"}}}}},"tags":["Voice Models"]}}}} ``` #### Requesting the result of the task from the server using the generation\_id ## GET /v1/stt/{generation\_id} > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.SpeechToTextGetResponseDTO":{"type":"object","properties":{"generation_id":{"type":"string"},"status":{"type":"string","enum":["queued","completed","error","generating"]},"result":{"anyOf":[{"type":"object","properties":{"metadata":{"type":"object","properties":{"transaction_key":{"type":"string","description":"A unique transaction key; currently always “deprecated”."},"request_id":{"type":"string","description":"A UUID identifying this specific transcription request."},"sha256":{"type":"string","description":"The SHA-256 hash of the submitted audio file (for pre-recorded requests)."},"created":{"type":"string","format":"date-time","description":"ISO-8601 timestamp."},"duration":{"type":"number","description":"Length of the audio in seconds."},"channels":{"type":"number","description":"The top-level results object containing per-channel transcription alternatives."},"models":{"type":"array","items":{"type":"string"},"description":"List of model UUIDs used for this transcription"},"model_info":{"type":"object","additionalProperties":{"type":"object","properties":{"name":{"type":"string","description":"The human-readable name of the model — identifies which model was used."},"version":{"type":"string","description":"The specific version of the model."},"arch":{"type":"string","description":"The architecture of the model — describes the model family / generation."}},"required":["name","version","arch"]},"description":"Mapping from each model UUID (in 'models') to detailed info: its name, version, and architecture."}},"required":["transaction_key","request_id","sha256","created","duration","channels","models","model_info"],"description":"Metadata about the transcription response, including timing, models, and 
IDs."},"results":{"type":"object","nullable":true,"properties":{"channels":{"type":"object","properties":{"alternatives":{"type":"array","items":{"type":"object","properties":{"transcript":{"type":"string","description":"The full transcript text for this alternative."},"confidence":{"type":"number","description":"Overall confidence score (0-1) that assigns to this transcript alternative."},"words":{"type":"array","items":{"type":"object","properties":{"word":{"type":"string","description":"The raw recognized word, without punctuation or capitalization."},"start":{"type":"number","description":"Start timestamp of the word (in seconds, from beginning of audio)."},"end":{"type":"number","description":"End timestamp of the word (in seconds)."},"confidence":{"type":"number","description":"Confidence score (0-1) for this individual word."},"punctuated_word":{"type":"string","description":"The same word but with punctuation/capitalization applied (if smart_format is enabled)."}},"required":["word","start","end","confidence","punctuated_word"]},"description":"List of word-level timing, confidence, and punctuation details."},"paragraphs":{"type":"array","items":{"type":"object","properties":{"transcript":{"type":"string","description":"The transcript split into paragraphs (with line breaks), when paragraphing is enabled."},"paragraphs":{"type":"object","properties":{"sentences":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"Text of a single sentence in the paragraph."},"start":{"type":"number","description":"Start time of the sentence (in seconds)."},"end":{"type":"number","description":"End time of the sentence (in seconds)."}},"required":["text","start","end"]},"description":"List of sentences in this paragraph, with start/end times."},"num_words":{"type":"number","description":"Number of words in this paragraph."},"start":{"type":"number","description":"Start time of the paragraph (in seconds)."},"end":{"type":"number","description":"End time of the paragraph (in seconds)."}},"required":["sentences","num_words","start","end"],"description":"Structure describing each paragraph: its timespan, word count, and sentence breakdown."}},"required":["transcript","paragraphs"]},"description":"An array of paragraph objects, present when the paragraphs feature is enabled."}},"required":["transcript","confidence","words","paragraphs"]},"description":"List of possible transcription hypotheses (“alternatives”) for each channel."}},"required":["alternatives"],"description":"The top-level results object containing per-channel transcription 
alternatives."}},"required":["channels"]}},"required":["metadata"]},{"type":"object","properties":{"id":{"type":"string","format":"uuid"},"language_model":{"type":"string"},"acoustic_model":{"type":"string"},"language_code":{"type":"string"},"status":{"type":"string","enum":["queued","processing","completed","error"]},"language_detection":{"type":"boolean"},"language_confidence_threshold":{"type":"number"},"language_confidence":{"type":"number"},"speech_model":{"type":"string","enum":["best","slam-1","universal"]},"text":{"type":"string"},"words":{"type":"array","items":{"type":"object","properties":{"confidence":{"type":"number"},"end":{"type":"number"},"speaker":{"type":"string"},"start":{"type":"number"},"text":{"type":"string"}},"required":["confidence","end","start","text"]}},"utterances":{"type":"array","items":{"type":"object","properties":{"confidence":{"type":"number"},"end":{"type":"number"},"speaker":{"type":"string"},"start":{"type":"number"},"text":{"type":"string"},"words":{"type":"array","items":{"type":"object","properties":{"confidence":{"type":"number"},"end":{"type":"number"},"speaker":{"type":"string"},"start":{"type":"number"},"text":{"type":"string"}},"required":["confidence","end","start","text"]}}},"required":["confidence","end","speaker","start","text","words"]}},"confidence":{"type":"number"},"audio_duration":{"type":"number"},"punctuate":{"type":"boolean"},"format_text":{"type":"boolean"},"disfluencies":{"type":"boolean"},"multichannel":{"type":"boolean"},"webhook_url":{"type":"string"},"webhook_status_code":{"type":"number"},"webhook_auth_header_name":{"type":"string"},"speed_boost":{"type":"boolean"},"auto_highlights_result":{"type":"object","properties":{"status":{"type":"string"},"results":{"type":"array","items":{"type":"object","properties":{"count":{"type":"number"},"rank":{"type":"number"},"text":{"type":"string"},"timestamps":{"type":"array","items":{"type":"object","properties":{"start":{"type":"number"},"end":{"type":"number"}},"required":["start","end"]}}},"required":["count","rank","text","timestamps"]}}},"required":["status","results"]},"auto_highlights":{"type":"boolean"},"audio_start_from":{"type":"number"},"audio_end_at":{"type":"number"},"word_boost":{"type":"array","items":{"type":"string"}},"boost_param":{"type":"string"},"filter_profanity":{"type":"boolean"},"redact_pii":{"type":"boolean"},"redact_pii_audio":{"type":"boolean"},"redact_pii_audio_quality":{"type":"string","enum":["mp3","wav"]},"redact_pii_policies":{"type":"array","items":{"type":"string"}},"redact_pii_sub":{"type":"string","enum":["entity_name","hash"]},"speaker_labels":{"type":"boolean"},"speakers_expected":{"type":"number"},"content_safety":{"type":"boolean"},"iab_categories":{"type":"boolean"},"content_safety_labels":{"type":"object","properties":{"status":{"type":"string"},"results":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string"},"labels":{"type":"array","items":{"type":"object","properties":{"label":{"type":"string"},"confidence":{"type":"number"},"severity":{"type":"number"}},"required":["label","confidence","severity"]}},"sentences_idx_start":{"type":"number"},"sentences_idx_end":{"type":"number"},"timestamp":{"type":"object","properties":{"start":{"type":"number"},"end":{"type":"number"}},"required":["start","end"]}},"required":["text","labels","sentences_idx_start","sentences_idx_end","timestamp"]}},"summary":{"type":"object","additionalProperties":{"type":"number"}}},"required":["status","results","summary"]},"iab_categories_result":{"t
ype":"object","properties":{"status":{"type":"string"},"results":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string"},"labels":{"type":"array","items":{"type":"object","properties":{"relevance":{"type":"number"},"label":{"type":"string"}},"required":["relevance","label"]}},"timestamp":{"type":"object","properties":{"start":{"type":"number"},"end":{"type":"number"}},"required":["start","end"]}},"required":["text","labels","timestamp"]}},"summary":{"type":"object","additionalProperties":{"type":"number"}}},"required":["status","results","summary"]},"custom_spelling":{"type":"array","items":{"type":"object","properties":{"from":{"type":"string"},"to":{"type":"string"}},"required":["from","to"]}},"chapters":{"type":"array","items":{"type":"object","properties":{"summary":{"type":"string"},"headline":{"type":"string"},"gist":{"type":"string"},"start":{"type":"number"},"end":{"type":"number"}},"required":["summary","headline","gist","start","end"]}},"summarization":{"type":"boolean"},"summary_type":{"type":"string"},"summary_model":{"type":"string"},"summary":{"type":"string"},"auto_chapters":{"type":"boolean"},"sentiment_analysis":{"type":"boolean"},"sentiment_analysis_results":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string"},"start":{"type":"number"},"end":{"type":"number"},"sentiment":{"type":"string","enum":["POSITIVE","NEUTRAL","NEGATIVE"]},"confidence":{"type":"number"},"speaker":{"type":"string"}},"required":["text","start","end","sentiment","confidence"]}},"entity_detection":{"type":"boolean"},"entities":{"type":"array","items":{"type":"object","properties":{"entity_type":{"type":"string"},"text":{"type":"string"},"start":{"type":"number"},"end":{"type":"number"}},"required":["entity_type","text","start","end"]}},"speech_threshold":{"type":"number"},"throttled":{"type":"boolean"},"error":{"type":"string"}},"required":["id","status"],"additionalProperties":false},{"type":"object","properties":{"text":{"type":"string"},"usage":{"type":"object","properties":{"type":{"type":"string","enum":["tokens"]},"input_tokens":{"type":"number"},"input_token_details":{"type":"object","properties":{"text_tokens":{"type":"number"},"audio_tokens":{"type":"number"}},"required":["text_tokens","audio_tokens"]},"output_tokens":{"type":"number"},"total_tokens":{"type":"number"}},"required":["input_tokens","output_tokens","total_tokens"]}},"required":["text"],"additionalProperties":false},{"nullable":true}]},"error":{"nullable":true}},"required":["generation_id","status"]}}},"paths":{"/v1/stt/{generation_id}":{"get":{"operationId":"VoiceModelsController_getSTT_v1","parameters":[{"name":"generation_id","required":true,"in":"path","schema":{"type":"string"}}],"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.SpeechToTextGetResponseDTO"}}}}},"tags":["Voice Models"]}}}} ``` ## Quick Example: Processing a Speech Audio File via URL Let's transcribe the following audio fragment: {% embed url="" %} {% code overflow="wrap" %} ```python import time import requests import json # for getting a structured output with indentation base_url = "https://api.aimlapi.com/v1" # Insert your AIML API Key instead of : api_key = "" # Creating and sending a speech-to-text conversion task to the server def create_stt(): url = f"{base_url}/stt/create" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "aai/universal", "url": "https://audio-samples.github.io/samples/mp3/blizzard_primed/sample-0.mp3" 
} response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_stt(gen_id): url = f"{base_url}/stt/{gen_id}" headers = { "Authorization": f"Bearer {api_key}", } response = requests.get(url, headers=headers) return response.json() # First, start the generation, then repeatedly request the result from the server every 10 seconds. def main(): stt_response = create_stt() gen_id = stt_response.get("generation_id") if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_stt(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status == "waiting" or status == "active": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data["result"]["text"]) # Uncomment the line below to print the entire "result" object with all service data # print("Processing complete:\n", json.dumps(response_data["result"], indent=2, ensure_ascii=False)) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %}
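The request above sets only the model and the audio file URL. The `POST /v1/stt/create` schema at the top of this page also documents a number of optional flags (diarization, summarization, PII redaction, and so on). As an illustrative, untested variant of the `data` payload from the example, you might enable speaker labels and a bullet-point summary like this:

{% code overflow="wrap" %}
```python
# Hypothetical variant of the create_stt() payload from the example above.
# Field names are taken from the POST /v1/stt/create schema on this page;
# this exact combination of flags has not been verified against the live API.
data = {
    "model": "aai/universal",
    "url": "https://audio-samples.github.io/samples/mp3/blizzard_primed/sample-0.mp3",
    # Speaker diarization
    "speaker_labels": True,
    "speakers_expected": 2,
    # Summarization
    "summarization": True,
    "summary_model": "informative",
    "summary_type": "bullets",
}
```
{% endcode %}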
Response {% code overflow="wrap" %} ```json5 {'generation_id': '0cff4e24-c1ba-419d-8b62-46f342985881'} Still waiting... Checking again in 10 seconds. Processing complete:\n { "id": "04d07a4c-9238-4860-ac6f-534d58fdaf9a", "language_model": "assemblyai_default", "acoustic_model": "assemblyai_default", "language_code": "en_us", "status": "completed", "audio_url": "https://audio-samples.github.io/samples/mp3/blizzard_primed/sample-0.mp3", "text": "He doesn't belong to you. And I don't see how you have anything to do with what is be his power yet his he presumably that from this stage to you be fired.", "words": [ { "text": "He", "start": 400, "end": 520, "confidence": 0.98876953, "speaker": null }, { "text": "doesn't", "start": 520, "end": 880, "confidence": 0.9296875, "speaker": null }, { "text": "belong", "start": 880, "end": 1320, "confidence": 1, "speaker": null }, { "text": "to", "start": 1320, "end": 1560, "confidence": 0.99853516, "speaker": null }, { "text": "you.", "start": 1560, "end": 1840, "confidence": 0.99853516, "speaker": null }, { "text": "And", "start": 1840, "end": 2120, "confidence": 0.99365234, "speaker": null }, { "text": "I", "start": 2120, "end": 2280, "confidence": 0.99902344, "speaker": null }, { "text": "don't", "start": 2280, "end": 2520, "confidence": 0.9949544, "speaker": null }, { "text": "see", "start": 2520, "end": 2720, "confidence": 0.99902344, "speaker": null }, { "text": "how", "start": 2720, "end": 3000, "confidence": 0.99902344, "speaker": null }, { "text": "you", "start": 3000, "end": 3320, "confidence": 0.99853516, "speaker": null }, { "text": "have", "start": 3320, "end": 3600, "confidence": 0.99658203, "speaker": null }, { "text": "anything", "start": 3600, "end": 4080, "confidence": 0.9968262, "speaker": null }, { "text": "to", "start": 4080, "end": 4240, "confidence": 0.99902344, "speaker": null }, { "text": "do", "start": 4240, "end": 4360, "confidence": 0.99902344, "speaker": null }, { "text": "with", "start": 4360, "end": 4520, "confidence": 0.9902344, "speaker": null }, { "text": "what", "start": 4520, "end": 4720, "confidence": 0.9941406, "speaker": null }, { "text": "is", "start": 4720, "end": 4920, "confidence": 0.9819336, "speaker": null }, { "text": "be", "start": 4920, "end": 5080, "confidence": 0.8720703, "speaker": null }, { "text": "his", "start": 5080, "end": 5280, "confidence": 0.9951172, "speaker": null }, { "text": "power", "start": 5280, "end": 5520, "confidence": 0.8588867, "speaker": null }, { "text": "yet", "start": 5520, "end": 5840, "confidence": 0.5756836, "speaker": null }, { "text": "his", "start": 5840, "end": 6160, "confidence": 0.5419922, "speaker": null }, { "text": "he", "start": 6160, "end": 6360, "confidence": 0.96972656, "speaker": null }, { "text": "presumably", "start": 6360, "end": 6840, "confidence": 0.5012207, "speaker": null }, { "text": "that", "start": 6840, "end": 7000, "confidence": 0.8901367, "speaker": null }, { "text": "from", "start": 7000, "end": 7160, "confidence": 0.9951172, "speaker": null }, { "text": "this", "start": 7160, "end": 7320, "confidence": 0.9926758, "speaker": null }, { "text": "stage", "start": 7320, "end": 7680, "confidence": 0.9953613, "speaker": null }, { "text": "to", "start": 7680, "end": 7960, "confidence": 0.9941406, "speaker": null }, { "text": "you", "start": 7960, "end": 8320, "confidence": 0.9975586, "speaker": null }, { "text": "be", "start": 9440, "end": 9720, "confidence": 0.4555664, "speaker": null }, { "text": "fired.", "start": 9720, "end": 10050, "confidence": 
0.4534912, "speaker": null } ], "utterances": null, "confidence": 0.90746206, "audio_duration": 11, "punctuate": true, "format_text": true, "dual_channel": null, "webhook_url": null, "webhook_status_code": null, "webhook_auth": false, "webhook_auth_header_name": null, "speed_boost": false, "auto_highlights_result": null, "auto_highlights": false, "audio_start_from": null, "audio_end_at": null, "word_boost": [], "boost_param": null, "prompt": null, "keyterms_prompt": [], "filter_profanity": false, "redact_pii": false, "redact_pii_audio": false, "redact_pii_audio_quality": null, "redact_pii_audio_options": null, "redact_pii_policies": null, "redact_pii_sub": null, "speaker_labels": false, "speaker_options": null, "content_safety": false, "iab_categories": false, "content_safety_labels": { "status": "unavailable", "results": [], "summary": {} }, "iab_categories_result": { "status": "unavailable", "results": [], "summary": {} }, "language_detection": false, "language_detection_options": null, "language_confidence_threshold": null, "language_confidence": null, "custom_spelling": null, "throttled": false, "auto_chapters": false, "summarization": false, "summary_type": null, "summary_model": null, "custom_topics": false, "topics": [], "speech_threshold": null, "speech_model": "universal", "chapters": null, "disfluencies": false, "entity_detection": false, "sentiment_analysis": false, "sentiment_analysis_results": null, "entities": null, "speakers_expected": null, "summary": null, "custom_topics_results": null, "is_deleted": null, "multichannel": null, "project_id": 675898, "token_id": 1245789 } ``` {% endcode %}
--- # Source: https://docs.aimlapi.com/api-references/image-models/bytedance/uso.md # USO (Image-to-Image) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `bytedance/uso` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview USO (Unified Style-Subject Optimized) — a single model that seamlessly combines style-based and subject-based image generation. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["bytedance/uso"]},"image_urls":{"type":"array","items":{"type":"string","format":"uri"},"minItems":1,"maxItems":3,"description":"An array of up to 3 image URLs. The first image is always treated as the primary input for image-to-image generation, while the remaining images (if provided) serve as visual style references for the output."},"image_size":{"anyOf":[{"type":"string","enum":["square_hd","square","portrait_4_3","portrait_16_9","landscape_4_3","landscape_16_9"]},{"type":"object","properties":{"width":{"type":"number"},"height":{"type":"number"}},"required":["width","height"]}],"default":"square_hd","description":"The size of the generated image."},"negative_prompt":{"type":"string","default":"","description":"The description of elements to avoid in the generated image."},"num_inference_steps":{"type":"integer","minimum":1,"maximum":50,"default":28,"description":"The number of inference steps to perform."},"guidance_scale":{"type":"number","minimum":1,"maximum":20,"default":4,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt when looking for a related image to show you."},"keep_size":{"type":"boolean"},"num_images":{"type":"number","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."},"seed":{"type":"integer","minimum":1,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"sync_mode":{"type":"boolean","default":false,"description":"If set to true, the function will wait for the image to be generated and uploaded before returning the response. 
This will increase the latency of the function but it allows you to get the image directly in the response without going through the CDN."},"enable_safety_checker":{"type":"boolean","default":true,"description":"If set to True, the safety checker will be enabled."},"output_format":{"type":"string","enum":["jpeg","png"],"default":"png","description":"The format of the generated image."},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."}},"required":["model","image_urls","prompt"],"title":"bytedance/uso"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified size using a simple prompt. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization":"Bearer ", "Content-Type":"application/json", }, json={ "model":"bytedance/uso", "prompt": "The T-Rex is wearing a business suit, sitting in a cozy small café, drinking from a mug. Blur the background slightly to create a bokeh effect.", "image_urls": [ "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png" ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'bytedance/uso', prompt: 'The T-Rex is wearing a business suit, sitting in a cozy small café, drinking from a mug. Blur the background slightly to create a bokeh effect.', image_urls: [ 'https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png' ], }), }); const data = await response.json(); console.log(JSON.stringify(data, null, 2)); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "images": [ { "url": "https://cdn.aimlapi.com/eagle/files/penguin/sMMWnB7wyBK8o_XiAohle.png", "content_type": "image/png", "file_name": null, "file_size": null, "width": 1024, "height": 1024 } ], "seed": 351168504, "has_nsfw_concepts": [ false ], "prompt": "The T-Rex is wearing a business suit, sitting in a cozy small café, drinking from a mug. Blur the background slightly to create a bokeh effect.", "timings": { "inference": 10.547778039996047 }, "data": [ { "url": "https://cdn.aimlapi.com/eagle/files/penguin/sMMWnB7wyBK8o_XiAohle.png", "content_type": "image/png", "file_name": null, "file_size": null, "width": 1024, "height": 1024 } ], "meta": { "usage": { "tokens_used": 420000 } } } ``` {% endcode %}
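By default the response carries URLs rather than inline image data (see `sync_mode` in the schema above), so the image itself still needs to be downloaded. A small follow-up sketch that saves the first generated image locally, reusing the `data` variable from the Python tab above (the output file name is arbitrary):

{% code overflow="wrap" %}
```python
import requests

# Download the first generated image listed in the response.
# `data` is the parsed JSON from the Python example above.
image_url = data["data"][0]["url"]
image_bytes = requests.get(image_url).content

with open("t-rex-in-cafe.png", "wb") as f:
    f.write(image_bytes)
```
{% endcode %}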
| Reference Image | Generated Image |
| --------------- | --------------- |
| (original) | "The T-Rex is wearing a business suit, sitting in a cozy small café, drinking from a mug. Blur the background slightly to create a bokeh effect." |

--- # Source: https://docs.aimlapi.com/api-references/video-models/kling-ai/v1-pro-image-to-video.md # v1-pro/image-to-video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `kling-video/v1/pro/image-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} This model transforms static images into dynamic video clips. It offers more advanced camera controls than the [v1 Standard](https://docs.aimlapi.com/api-references/video-models/kling-ai/v1-standard-image-to-video) model, including tilt, pan, zoom, and roll movements, and produces richer detail and more stable camera motion, enhancing the overall visual quality of the generated videos. Enhanced animations make elements like water flow and character movements appear more natural and engaging. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`. \ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server {% hint style="info" %} The aspect ratio of the generated video is solely determined by the aspect ratio of the input reference image. {% endhint %} ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["kling-video/v1/pro/image-to-video"]},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame for the video."},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"tail_image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image to be used as the last frame of the video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10],"default":"5"},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"cfg_scale":{"type":"number","minimum":0,"maximum":1,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt."},"static_mask":{"type":"string","format":"uri","description":"URL of the image for Static Brush Application Area (Mask image created by users using the motion brush)."},"dynamic_masks":{"type":"array","items":{"type":"object","properties":{"mask":{"type":"string"},"trajectories":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer"},"y":{"type":"integer"}},"required":["x","y"]},"minItems":2,"maxItems":77}},"required":["mask","trajectories"]},"maxItems":6,"description":"List of dynamic masks."},"camera_control":{"type":"object","properties":{"type":{"type":"string","enum":["simple","down_back","forward_up","right_turn_forward","left_turn_forward"]},"config":{"type":"object","properties":{"horizontal":{"type":"number","minimum":-10,"maximum":10},"vertical":{"type":"number","minimum":-10,"maximum":10},"pan":{"type":"number","minimum":-10,"maximum":10},"tilt":{"type":"number","minimum":-10,"maximum":10},"roll":{"type":"number","minimum":-10,"maximum":10},"zoom":{"type":"number","minimum":-10,"maximum":10}}}},"description":"Camera control parameters."}},"required":["model","image_url"],"title":"kling-video/v1/pro/image-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded 
from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. ## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server We have a classic [reproduction](https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg) of the famous da Vinci painting. Let's ask the model to generate a video where the Mona Lisa puts on glasses. The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "kling-video/v1/pro/image-to-video", "prompt": "Mona Lisa puts on glasses with her hands.", "image_url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/mona_lisa_extended.jpg", "duration": "5", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 # 1000 sec = 16 min 40 sec while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["queued", "generating"]: print(f"Status: {status}. Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. 
Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "kling-video/v1/pro/image-to-video", prompt: "Mona Lisa puts on glasses with her hands.", image_url: "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/mona_lisa_extended.jpg", duration: "5", }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 15 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec = 16 min 40 sec const interval = 15 * 1000; // 15 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }) } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: gg6lldgKC2tRmvJwBjK7- Status: queued. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: {'id': 'gg6lldgKC2tRmvJwBjK7-', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/kangaroo/bs2/upload-ylab-stunt-sgp/muse/784256485483880450/VIDEO/20260122/d40612cb5e896d7522b879ef1df858d9-a9d6cd03-e7c7-436f-a5ea-8e4a0436905a.mp4?cacheKey=ChtzZWN1cml0eS5rbGluZy5tZXRhX2VuY3J5cHQSsAHb-cd6E1sNt1khQt00KZDlKKywMoWCcZZpN20KDMWKJ2KpOPIvyEvdZGxo6Pq2ZdfroGNpC8qzkfsQ5NKoUYw3eMISmzkZUGfsxQoGf1s62zOVo8DgLcWTcSVjBrPWDiINCRcRfPLhDcpBBo0nxg5Y4okHO7NlLCxkO5GEezRFVR_DKVVo2STvgP67QWtHz3CzfT8rUhcGCYs1XkQTMJhUxaBjWmmBYRx3p4wutoKr3hoSYF8ADrg3uvuiuzhTSVrQUvw0IiAsCh1HKrgcKo6_JboDCYiKlWjo6KJpZr1TjTMcg3vWAigFMAE&x-kcdn-pid=112781&ksSecret=0de731a5285509eb9c9eb89da6e10bd0&ksTime=6999d288'}} ``` {% endcode %}
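Once the task status is `completed`, the `video.url` field in the response points at a plain MP4 file that can be fetched like any other URL. A minimal sketch for saving it locally, assuming `response_data` is the completed-status dict returned by `get_video()` above (the local file name is arbitrary):

{% code overflow="wrap" %}
```python
import requests

# Save the finished video locally using the URL from the completed response.
video_url = response_data["video"]["url"]

with open("mona_lisa_glasses.mp4", "wb") as f:
    f.write(requests.get(video_url).content)
```
{% endcode %}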
**Processing time**: \~ 3 min 24 sec. **Generated video** (1280x720, without sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/kling-ai/v1-pro-text-to-video.md # v1-pro/text-to-video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `kling-video/v1/pro/text-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} This model converts textual descriptions into high-quality video content. Provides advanced camera control options, including more sophisticated movements and stabilization. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["kling-video/v1/pro/text-to-video"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"aspect_ratio":{"type":"string","enum":["16:9","9:16","1:1"],"default":"16:9","description":"The aspect ratio of the generated video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10]},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"cfg_scale":{"type":"number","minimum":0,"maximum":1,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt."},"camera_control":{"type":"string","enum":["down_back","forward_up","right_turn_forward","left_turn_forward"],"description":"Camera control parameters."},"advanced_camera_control":{"type":"object","properties":{"movement_type":{"type":"string","enum":["horizontal","vertical","pan","tilt","roll","zoom"],"description":"The type of camera movement."},"movement_value":{"type":"integer","minimum":-10,"maximum":10,"description":"The value of the camera movement."}},"required":["movement_type","movement_value"],"description":"Advanced camera control parameters."}},"required":["model","prompt"],"title":"kling-video/v1/pro/text-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "kling-video/v1/pro/text-to-video", "prompt": "A cheerful white raccoon running through a sequoia forest", "aspect_ratio": "16:9", "duration": "5" } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 # 1000 sec = 16 min 40 sec while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["queued", "generating"]: print(f"Status: {status}. Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. 
Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; const https = require("https"); const { URL } = require("url"); // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: 'kling-video/v1/pro/text-to-video', prompt: ` A cheerful white raccoon running through a sequoia forest. `, duration: 5, aspect_ratio: '16:9' }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data) } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const result = JSON.parse(body); callback(result); } }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json" } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const result = JSON.parse(body); callback(result); }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.end(); } // Initiates video generation and checks the status every 15 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec = 16 min 40 sec const interval = 15 * 1000; // 15 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }) } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: QC1_XDQc6Hx-p3RUcSF5g Status: queued. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: completed Processing complete:/n {'id': 'QC1_XDQc6Hx-p3RUcSF5g', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/kangaroo/bs2/upload-ylab-stunt-sgp/muse/784256485483880450/VIDEO/20260120/b7d828b04fd5625e8a8ae7360670e018-4ca2d980-1059-4eef-8d82-cc5d3def46d4.mp4?cacheKey=ChtzZWN1cml0eS5rbGluZy5tZXRhX2VuY3J5cHQSsAE-a6iRQ48yHd6Bcb50UbOW-oeHGqbPkx-HoTw1iG58yL7vOo_gG3WdPFMNfRwFw2I_41kVhxPKYKQEK_-V1Uj8wTVcMpQavNs7pDSjkJBf9rQCHydicujHHlvwootTjCzj7xx1b0SNvQQ_PfZ9Bie-UrNpBh_z9wrOqlH_MqjEDRwPncQhx9XnWtmsw9zc0VI-eVA917AmUxwU0RoVwu_KNfyjuTVqE2f6KvMkH-0GJxoSNzwANz7xQHQ1n3QnpfpV26ryIiCyKnPpFgfHyhGr_h2uPvnI8Bm1wPo5GKpbkShd7np9wigFMAE&x-kcdn-pid=112781&ksSecret=c2fe392eda314f14a7c53c6535802902&ksTime=6997009f'}} ``` {% endcode %}
**Processing time**: \~ 1 min 18 sec. **Generated video** (1280x720, without sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/kling-ai/v1-standard-image-to-video.md # v1-standard/image-to-video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `kling-video/v1/standard/image-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} This model transforms static images into dynamic video clips. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`. \ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["kling-video/v1/standard/image-to-video"]},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame for the video."},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"tail_image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image to be used as the last frame of the video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10],"default":"5"},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"cfg_scale":{"type":"number","minimum":0,"maximum":1,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt."},"static_mask":{"type":"string","format":"uri","description":"URL of the image for Static Brush Application Area (Mask image created by users using the motion brush)."},"dynamic_masks":{"type":"array","items":{"type":"object","properties":{"mask":{"type":"string"},"trajectories":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer"},"y":{"type":"integer"}},"required":["x","y"]},"minItems":2,"maxItems":77}},"required":["mask","trajectories"]},"maxItems":6,"description":"List of dynamic masks."},"camera_control":{"type":"object","properties":{"type":{"type":"string","enum":["simple","down_back","forward_up","right_turn_forward","left_turn_forward"]},"config":{"type":"object","properties":{"horizontal":{"type":"number","minimum":-10,"maximum":10},"vertical":{"type":"number","minimum":-10,"maximum":10},"pan":{"type":"number","minimum":-10,"maximum":10},"tilt":{"type":"number","minimum":-10,"maximum":10},"roll":{"type":"number","minimum":-10,"maximum":10},"zoom":{"type":"number","minimum":-10,"maximum":10}}}},"description":"Camera control parameters."}},"required":["model","image_url"],"title":"kling-video/v1/standard/image-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if 
any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. ## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server We have a classic [reproduction](https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg) of the famous da Vinci painting. Let's ask the model to generate a video where the Mona Lisa puts on glasses. The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "kling-video/v1/standard/image-to-video", "prompt": "Mona Lisa puts on glasses with her hands.", "image_url": "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", "duration": "5", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 # 1000 sec = 16 min 40 sec while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["queued", "generating"]: print(f"Status: {status}. Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. 
Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "kling-video/v1/standard/image-to-video", prompt: "Mona Lisa puts on glasses with her hands.", image_url: "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", duration: "5", }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 15 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec = 16 min 40 sec const interval = 15 * 1000; // 15 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }) } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: nJ8Xcj0YCh8jZL1noqiZH Status: queued. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: {'id': 'nJ8Xcj0YCh8jZL1noqiZH', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/kangaroo/bs2/upload-ylab-stunt-sgp/muse/784256485483880450/VIDEO/20260120/65936dd58d920424e4eb9c63ced58d91-95043990-9089-4f29-964f-bdb9dc8613ef.mp4?cacheKey=ChtzZWN1cml0eS5rbGluZy5tZXRhX2VuY3J5cHQSsAGF3eZphj1FFPB8b_FDXynExDTd0HvbX2EVjv4yP_Gmh8VWD9o5tDZwQTgxGhTON39FMvEafOs-MIqntimFHNbc87q1kSLAvr7i2unqGZPUcOSe1_QHuohz1ziHRpgZS5QJBgyVWcTO1O7rzPEBmcuVq2KAWv1-Hdtf2hsKUWGpM_ND2uqLgtOO3TSOxUW4L0sfxdTBkCzRgtGT8R-PlMk-18wbhrdtdjdDZ9G2KMw1jhoSS2Y9drB8Z4ednHxTIh7XZcnaIiBz78YUdtCCF-Oy9Z_9Dffy3JHkkjqHh7CM6cBjju3sJCgFMAE&x-kcdn-pid=112781&ksSecret=e2572bf52259a55921fce5697719d027&ksTime=6996dc99'}} ``` {% endcode %}
**Processing time**: \~4 min 9 sec. **Original**: [832x1216](https://drive.google.com/file/d/1I4yUQanF_g_UppGrN188Zl0unxa5SG8i/view?usp=sharing) (without sound) **Low-res GIF preview**:
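The full example above passes only the basic parameters (`model`, `image_url`, `prompt`, `duration`). The POST schema for this model also accepts optional controls such as `negative_prompt`, `cfg_scale`, and `camera_control`. Below is a minimal sketch of a request that adds them; the parameter values are purely illustrative and are not taken from the original example.

{% code overflow="wrap" %}
```python
import requests

api_key = ""  # Insert your AIML API Key

# Same model and reference image as in the example above, plus optional
# parameters documented in the POST schema (illustrative values only).
payload = {
    "model": "kling-video/v1/standard/image-to-video",
    "image_url": "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg",
    "prompt": "Mona Lisa puts on glasses with her hands.",
    "duration": 5,
    "negative_prompt": "blurry, distorted hands",
    "cfg_scale": 0.5,                # 0..1, higher means closer adherence to the prompt
    "camera_control": {
        "type": "simple",
        "config": {"zoom": 5},       # each config value must be between -10 and 10
    },
}

response = requests.post(
    "https://api.aimlapi.com/v2/video/generations",
    headers={"Authorization": f"Bearer {api_key}"},
    json=payload,
)
response.raise_for_status()
print(response.json()["id"])  # poll this ID exactly as shown in the example above
```
{% endcode %}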
--- # Source: https://docs.aimlapi.com/api-references/video-models/kling-ai/v1-standard-text-to-video.md # v1-standard/text-to-video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `kling-video/v1/standard/text-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} This model converts textual descriptions into high-quality video content. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["kling-video/v1/standard/text-to-video"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"aspect_ratio":{"type":"string","enum":["16:9","9:16","1:1"],"default":"16:9","description":"The aspect ratio of the generated video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10]},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"cfg_scale":{"type":"number","minimum":0,"maximum":1,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt."},"camera_control":{"type":"string","enum":["down_back","forward_up","right_turn_forward","left_turn_forward"],"description":"Camera control parameters."},"advanced_camera_control":{"type":"object","properties":{"movement_type":{"type":"string","enum":["horizontal","vertical","pan","tilt","roll","zoom"],"description":"The type of camera movement."},"movement_value":{"type":"integer","minimum":-10,"maximum":10,"description":"The value of the camera movement."}},"required":["movement_type","movement_value"],"description":"Advanced camera control parameters."}},"required":["model","prompt"],"title":"kling-video/v1/standard/text-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL.
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "kling-video/v1/standard/text-to-video", "prompt": "A cheerful white raccoon running through a sequoia forest", "aspect_ratio": "16:9", "duration": "5" } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 # 1000 sec = 16 min 40 sec while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["queued", "generating"]: print(f"Status: {status}. Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; const https = require("https"); const { URL } = require("url"); // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: 'kling-video/v1/standard/text-to-video', prompt: ` A cheerful white raccoon running through a sequoia forest. 
`, duration: 5, aspect_ratio: '16:9' }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data) } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const result = JSON.parse(body); callback(result); } }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json" } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const result = JSON.parse(body); callback(result); }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.end(); } // Initiates video generation and checks the status every 15 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec = 16 min 40 sec const interval = 15 * 1000; // 15 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }) } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: U-Q3R_yVPfHqiAQlaWdss Status: queued. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: {'id': 'U-Q3R_yVPfHqiAQlaWdss', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/kangaroo/bs2/upload-ylab-stunt-sgp/muse/784256485483880450/VIDEO/20260122/4c3f80428e9cfc90573418b5a14aae18-4be7549a-04b3-461a-8a06-2e3e147be93a.mp4?cacheKey=ChtzZWN1cml0eS5rbGluZy5tZXRhX2VuY3J5cHQSsAH7JuE3zOoU6y4-isYiLG3XQhIZiaBLnGLFVCbaRZT5oEbfW69KqY8-00jhV_r3ymLKxpT7bpTtlO0Z6hyLGfocRGkW46J3KDFhsUwTaqEmujGUVTgNHQIWdhuWIglyTqYlrM4dVIvbefjHwFX2eWtCEYFpa14-QdfAzhEuPR3S4_TvnvZCsyzJKDIu0NN0A6szuf-X_32wYAaon6BCdTTyNTCm35mviMUmR8EsM46vlxoSj4dfrTYhUzYe6jlYP2ZR__D3IiDovwC4GmH8pWJTadrFYzjrOSbFO1lzTFMwjFWhe1MR5igFMAE&x-kcdn-pid=112781&ksSecret=df68c4a8165cb6f008302e4bc12e49ae&ksTime=6999cdc9'}} ``` {% endcode %}
**Processing time**: \~ 5 min 22 sec. **Generated video** (1280x720, without sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/kling-ai/v1.6-pro-effects.md # v1.6-pro/effects {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `klingai/kling-video-v1.6-pro-effects` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} A specialized video model that generates short clips based on reference images of people, applying one of several preset scenarios: two people hugging, kissing, or making a heart shape with their hands (requires **2** reference images), or a single person being humorously squished like clay or inflated like a balloon (requires **1** reference image).
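As a quick illustration of that requirement, only the shape of the `image_url` field changes between the two groups of effects. The sketch below shows both request bodies; the image URLs are placeholders, not links from the original documentation.

{% code overflow="wrap" %}
```python
# Two-person effects (hug, kiss, heart_gesture) need exactly two reference images,
# passed as an array. The first image appears on the left side of the video,
# the second on the right.
two_person_task = {
    "model": "klingai/kling-video-v1.6-pro-effects",
    "image_url": [
        "https://example.com/person_left.jpg",   # placeholder URL
        "https://example.com/person_right.jpg",  # placeholder URL
    ],
    "effect_scene": "heart_gesture",
    "duration": 5,
}

# Single-person effects (squish, expansion) need only one reference image.
single_person_task = {
    "model": "klingai/kling-video-v1.6-pro-effects",
    "image_url": "https://example.com/person.jpg",  # placeholder URL
    "effect_scene": "squish",
    "duration": 5,
}
```
{% endcode %}

Either dictionary can be sent as the JSON body of the POST request, as shown in the code example further down this page.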
Generated Video Examples (low-res GIF previews)
| Photo #1 | Photo #2 | "effect_scene": "heart_gesture" |
| -------- | -------- | ------------------------------- |
| "effect_scene": "hug" | "effect_scene": "kiss" | | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | "effect_scene": "squish" | "effect_scene": "expansion" | Theoretically, you can get acceptable results for the `squish` and `expansion` effects even when using photos of animals, inanimate objects, or even landscapes. The output can be unusual, but the video will be generated.
"effect_scene": "squish""effect_scene": "squish"
However, if you try to use such photos with the `hug`, `kiss`, or `heart_gesture` effects, you’ll receive an error saying `“Could not detect face in the image”`. {% code overflow="wrap" %} ```json5 Processing complete:\n {'id': '50f3e8ae-3d88-482f-95ea-7faa4799f60f:kling-video/v1.6/pro/effects', 'status': 'error', 'error': {'detail': [{'loc': ['body', 'image_url'], 'msg': 'Could not detect face in the image', 'type': 'face_detection_error', 'input': 'https://rgo.ru/upload/s34web.imageadapter/668e00e0ed33855a9c79de12d2f88206/2131465.jpg'}]}} ``` {% endcode %}
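Because such failures are reported through the normal polling endpoint (the task ends with `status: "error"` and details in the `error` field), it can be useful to handle that case explicitly instead of only printing the final response. The snippet below is a minimal sketch of a single polling step; it uses the universal URL, and, as the output above shows, the exact shape of the error payload may vary.

{% code overflow="wrap" %}
```python
import requests

def check_task(gen_id, api_key):
    """Poll once: return the video URL on success, raise on a reported error, return None while the task is still running."""
    response = requests.get(
        "https://api.aimlapi.com/v2/video/generations",
        params={"generation_id": gen_id},
        headers={"Authorization": f"Bearer {api_key}"},
    )
    response.raise_for_status()
    data = response.json()
    status = data.get("status")
    if status == "completed":
        return data["video"]["url"]
    if status == "error":
        # The error payload may be a simple {name, message} object or a more
        # detailed structure like the face-detection example above.
        raise RuntimeError(f"Generation failed: {data.get('error')}")
    return None  # still queued or generating, poll again later
```
{% endcode %}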
## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["klingai/kling-video-v1.6-pro-effects"]},"image_url":{"anyOf":[{"type":"string","format":"uri"},{"type":"array","items":{"type":"string","format":"uri"}}],"description":"For hug, kiss, and heart_gesture effects, pass an array containing exactly two image URLs. For squish or expansion, only one image URL is required."},"effect_scene":{"type":"string","enum":["magic_fireball","pet_moto_rider","media_interview","pet_lion","pet_delivery","pet_chef","santa_gifts","santa_hug","girlfriend","boyfriend","heart_gesture_1","pet_wizard","smoke_smoke","thumbs_up","instant_kid","dollar_rain","cry_cry","building_collapse","gun_shot","mushroom","double_gun","pet_warrior","lightning_power","jesus_hug","shark_alert","long_hair","lie_flat","polar_bear_hug","brown_bear_hug","jazz_jazz","office_escape_plow","fly_fly","watermelon_bomb","pet_dance","boss_coming","wool_curly","iron_warrior","pet_bee","marry_me","swing_swing","day_to_night","piggy_morph","wig_out","car_explosion","ski_ski","tiger_hug","siblings","construction_worker","let's_ride","snatched","magic_broom","felt_felt","jumpdrop","celebration","splashsplash","hula","surfsurf","fairy_wing","angel_wing","dark_wing","skateskate","plushcut","jelly_press","jelly_slice","jelly_squish","jelly_jiggle","pixelpixel","yearbook","instant_film","anime_figure","rocketrocket","bloombloom","dizzydizzy","fuzzyfuzzy","squish","expansion","hug","kiss","heart_gesture","fight"],"description":"Video effect scene type"},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10],"default":"5"}},"required":["model","image_url","effect_scene"],"title":"klingai/kling-video-v1.6-pro-effects"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. 
This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. ## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Code Example In this example, we'll try to make two people from different photos (provided via URLs) form a romantic heart shape with their hands. No prompt is required — the effect is selected by setting the appropriate value for the `effect_scene` parameter. {% hint style="info" %} For effects that involve two images, the first one will always appear on the left side of the video, and the second one on the right. Therefore, to achieve the most natural-looking result, you may sometimes need to swap the image order. {% endhint %} {% hint style="info" %} In videos featuring a single person (the `squish` and `expansion` effects), an audio track is also generated — a mix of music and material interaction sounds, such as rubber squeaks and similar effects. {% endhint %}
Input image preview
| Photo #1 | Photo #2 |
| -------- | -------- |
The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time base_url = "https://api.aimlapi.com/v2" api_key = "" ref_img_url1 = "https://images.pexels.com/photos/733872/pexels-photo-733872.jpeg" ref_img_url2 = "https://storage.googleapis.com/falserverless/juggernaut_examples/QEW5VrzccxGva7mPfEXjf.png" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/generate/video/kling/generation" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "klingai/kling-video-v1.6-pro-effects", "image_url": [ref_img_url1, ref_img_url2], "duration": 5, "effect_scene": "heart_gesture" } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/generate/video/kling/generation" params = { "generation_id": gen_id, } # Insert your AIML API Key instead of : headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Gen_ID: 91bc5296-f0f9-4336-ab6a-60f993cbb971:kling-video/v1.6/pro/effects Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': '91bc5296-f0f9-4336-ab6a-60f993cbb971:kling-video/v1.6/pro/effects', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/eagle/files/koala/cSlcOq_yBcuGDvbf9BjGU_output.mp4', 'content_type': 'video/mp4', 'file_name': 'output.mp4', 'file_size': 8554108}} ``` {% endcode %}
Generated Video (GIF Preview)
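Note that the completed response for this model also includes `content_type`, `file_name`, and `file_size` alongside the URL, so the clip can be saved locally right after polling finishes. A small follow-up sketch (not part of the original example):

{% code overflow="wrap" %}
```python
import requests

def save_video(response_data, fallback_name="output.mp4"):
    """Download the generated clip using the URL and file name from the completed response."""
    video = response_data["video"]
    file_name = video.get("file_name", fallback_name)
    with requests.get(video["url"], stream=True) as resp:
        resp.raise_for_status()
        with open(file_name, "wb") as f:
            for chunk in resp.iter_content(chunk_size=8192):
                f.write(chunk)
    return file_name
```
{% endcode %}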
--- # Source: https://docs.aimlapi.com/api-references/video-models/kling-ai/v1.6-pro-image-to-video.md # v1.6-pro/image-to-video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `kling-video/v1.6/pro/image-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} An advanced AI video generation model designed to create high-quality videos from images. This version introduces significant improvements in visual quality and dynamic action rendering, enabling users to generate more consistent and visually appealing results compared to its predecessor, Kling 1.5. It incorporates natural camera movements and transitions for more cinematic outputs. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`. \ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server {% hint style="info" %} The aspect ratio of the generated video is solely determined by the aspect ratio of the input reference image. {% endhint %} ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["kling-video/v1.6/pro/image-to-video"]},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame for the video."},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"tail_image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image to be used as the last frame of the video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10],"default":"5"},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"cfg_scale":{"type":"number","minimum":0,"maximum":1,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt."},"static_mask":{"type":"string","format":"uri","description":"URL of the image for Static Brush Application Area (Mask image created by users using the motion brush)."},"dynamic_masks":{"type":"array","items":{"type":"object","properties":{"mask":{"type":"string"},"trajectories":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer"},"y":{"type":"integer"}},"required":["x","y"]},"minItems":2,"maxItems":77}},"required":["mask","trajectories"]},"maxItems":6,"description":"List of dynamic masks."},"camera_control":{"type":"object","properties":{"type":{"type":"string","enum":["simple","down_back","forward_up","right_turn_forward","left_turn_forward"]},"config":{"type":"object","properties":{"horizontal":{"type":"number","minimum":-10,"maximum":10},"vertical":{"type":"number","minimum":-10,"maximum":10},"pan":{"type":"number","minimum":-10,"maximum":10},"tilt":{"type":"number","minimum":-10,"maximum":10},"roll":{"type":"number","minimum":-10,"maximum":10},"zoom":{"type":"number","minimum":-10,"maximum":10}}}},"description":"Camera control parameters."}},"required":["model","image_url"],"title":"kling-video/v1.6/pro/image-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded 
from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. ## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server We have a classic [reproduction](https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg) of the famous da Vinci painting. Let's ask the model to generate a video where the Mona Lisa puts on glasses. The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "kling-video/v1.6/pro/image-to-video", "prompt": "Mona Lisa puts on glasses with her hands.", "image_url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/mona_lisa_extended.jpg", "duration": "5", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 # 1000 sec = 16 min 40 sec while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["queued", "generating"]: print(f"Status: {status}. Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. 
Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "kling-video/v1.6/pro/image-to-video", prompt: "Mona Lisa puts on glasses with her hands.", image_url: "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/mona_lisa_extended.jpg", duration: "5", }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 15 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec = 16 min 40 sec const interval = 15 * 1000; // 15 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }) } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: bA23_2Dk4Yh6RWT5BluZu Status: queued. Checking again in 15 seconds. Status: queued. Checking again in 15 seconds. Status: queued. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: {'id': 'bA23_2Dk4Yh6RWT5BluZu', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/kangaroo/bs2/upload-ylab-stunt-sgp/muse/784256485483880450/VIDEO/20260120/43c9d314f746b1ab3d4434b060fe6298-fba0f87f-d068-4c8f-ae2f-8ff4f5b8080e.mp4?cacheKey=ChtzZWN1cml0eS5rbGluZy5tZXRhX2VuY3J5cHQSsAH2nUCMQjth1fk--WxsdoHukqXCCmEJ2-NieFSq30bXT6sTr30KeRz1SQ6Y15nxtZWs8zQKstlEd3U2z9Ss6mpELhZsZ0lza013rFfOo_675Gr4e8QWwFFgrY7OCUXQZatF9WjvSDH1CLFDipXHbv64SqLc9q5mSEw-qyGgJTaN_S1158P1NCY9bHmK6Gogc6nMc3Xo9kWjiXSkeml0Lp4r-9Ri9qkWCKQ1DO4LH9asgBoSBJLSFVs3h7NB0bSL9MQcdiMhIiAyjq47-559YUzlexCIAezfS1whE0XMLDbogSc3o1nUcigFMAE&x-kcdn-pid=112781&ksSecret=dd6b736880701b6e341f773dc1b2a969&ksTime=6996fedd'}} ``` {% endcode %}
**Processing time**: \~ 3 min 6 sec. **Generated video** (1920x1080, without sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/kling-ai/v1.6-pro-text-to-video.md # v1.6-pro/text-to-video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `kling-video/v1.6/pro/text-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} An advanced AI video generation model, designed to create high-quality videos from text prompts and images. This version introduces significant improvements in prompt adherence, visual quality, and dynamic action rendering, enabling users to generate more consistent and visually appealing results compared to its predecessor, Kling 1.5. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["kling-video/v1.6/pro/text-to-video"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"aspect_ratio":{"type":"string","enum":["16:9","9:16","1:1"],"default":"16:9","description":"The aspect ratio of the generated video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10]},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"cfg_scale":{"type":"number","minimum":0,"maximum":1,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt."},"camera_control":{"type":"string","enum":["down_back","forward_up","right_turn_forward","left_turn_forward"],"description":"Camera control parameters."},"advanced_camera_control":{"type":"object","properties":{"movement_type":{"type":"string","enum":["horizontal","vertical","pan","tilt","roll","zoom"],"description":"The type of camera movement."},"movement_value":{"type":"integer","minimum":-10,"maximum":10,"description":"The value of the camera movement."}},"required":["movement_type","movement_value"],"description":"Advanced camera control parameters."}},"required":["model","prompt"],"title":"kling-video/v1.6/pro/text-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "kling-video/v1.6/pro/text-to-video", "prompt": "A cheerful white raccoon running through a sequoia forest", "aspect_ratio": "16:9", "duration": "5" } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 # 1000 sec = 16 min 40 sec while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["queued", "generating"]: print(f"Status: {status}. Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. 
Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; const https = require("https"); const { URL } = require("url"); // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: 'kling-video/v1.6/pro/text-to-video', prompt: ` A cheerful white raccoon running through a sequoia forest. `, duration: 5, aspect_ratio: '16:9' }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data) } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const result = JSON.parse(body); callback(result); } }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json" } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const result = JSON.parse(body); callback(result); }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.end(); } // Initiates video generation and checks the status every 15 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec = 16 min 40 sec const interval = 15 * 1000; // 15 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }) } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: zieO13nuA445by0d1Ozvq Status: queued. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: {'id': 'zieO13nuA445by0d1Ozvq', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/kangaroo/bs2/upload-ylab-stunt-sgp/muse/784256485483880450/VIDEO/20260120/cf1fe8df4232d91c347e6324f0c0e52a-1f2d380d-071c-4f30-807c-71b7cbb76b6a.mp4?cacheKey=ChtzZWN1cml0eS5rbGluZy5tZXRhX2VuY3J5cHQSsAGuHlZbIrKjbYAITkHpENp385OAl2xGQQDjNoCpgUmmX7-QrYVWyQDEBMT1qNrUO_dWGi1fnmM5OFRW1OVDMvRZcnFc56CGK1PzX0WL2JabxNp6UpA5sMSHU2NDv_hgsArQdGxto7rV6cMcTZK-y0qTtORtzcSQ2h2DeASDqGWkBx3J3RvnlnNnWFyPCTy9QKMit0nFpWAtZctha2jruP-Zcg49XTWiXBwaeQjHK3eiExoSbgFFnrpiqTZzapT733K7iU3CIiD0m_2hVRzwc9d2KfAcuXwCcdmnCStMvvxaBTCdzkVBCigFMAE&x-kcdn-pid=112781&ksSecret=f77ba857f39f42f0c6752a4885f2379d&ksTime=6996fff4'}} ``` {% endcode %}
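Once the task reaches the `completed` status, the `video.url` field in the response points to the finished MP4 file. Below is a minimal sketch of saving it locally with `requests`; the file name is just an illustrative choice, and `response_data` stands for the dictionary returned by the polling code above.

{% code overflow="wrap" %}
```python
import requests

def download_video(video_url: str, file_name: str = "generated_video.mp4") -> None:
    # Stream the finished MP4 from the URL returned in the "video" field of the response
    with requests.get(video_url, stream=True) as video_response:
        video_response.raise_for_status()
        with open(file_name, "wb") as file:
            for chunk in video_response.iter_content(chunk_size=8192):
                file.write(chunk)

# Example usage with the polling result from the code above:
# download_video(response_data["video"]["url"])
```
{% endcode %}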
**Processing time**: \~ 3 min 6 sec. **Generated video** (1920x1080, without sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/kling-ai/v1.6-standard-effects.md # v1.6-standard/effects {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `klingai/kling-video-v1.6-standard-effects` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} A specialized video model that generates short clips based on reference images of people, applying one of several preset scenarios: two people hugging, kissing, or making a heart shape with their hands (requires **2** reference images), or a single person being humorously squished like clay or inflated like a balloon (requires **1** reference image).
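As a quick illustration of the two request shapes (the image URLs below are placeholders, not working links): the paired effects expect an array of exactly two image URLs in `image_url`, while the single-person effects take a single URL.

{% code overflow="wrap" %}
```python
# Paired effect (hug, kiss, heart_gesture): exactly two reference images
paired_payload = {
    "model": "klingai/kling-video-v1.6-standard-effects",
    "effect_scene": "heart_gesture",
    "image_url": [
        "https://example.com/person_1.jpg",  # placeholder URL
        "https://example.com/person_2.jpg",  # placeholder URL
    ],
}

# Single-person effect (squish, expansion): one reference image
single_payload = {
    "model": "klingai/kling-video-v1.6-standard-effects",
    "effect_scene": "squish",
    "image_url": "https://example.com/person.jpg",  # placeholder URL
}
```
{% endcode %}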
Generated Video Examples
| Photo #1 | Photo #2 | "effect_scene": "heart_gesture" |
| -------- | -------- | ------------------------------- |

| "effect_scene": "hug"    | "effect_scene": "kiss"      |
| ------------------------ | --------------------------- |
| "effect_scene": "squish" | "effect_scene": "expansion" |

*Theoretically*, you can get acceptable results for the `squish` and `expansion` effects even when using photos of animals, inanimate objects, or even landscapes. The output can be unusual, but the video will be generated.

| "effect_scene": "squish" | "effect_scene": "squish" |
| ------------------------ | ------------------------ |
However, if you try to use such photos with the `hug`, `kiss`, or `heart_gesture` effects, you’ll receive an error saying `“Could not detect face in the image”`. {% code overflow="wrap" %} ```json5 Processing complete:\n {'id': '50f3e8ae-3d88-482f-95ea-7faa4799f60f:kling-video/v1.6/pro/effects', 'status': 'error', 'error': {'detail': [{'loc': ['body', 'image_url'], 'msg': 'Could not detect face in the image', 'type': 'face_detection_error', 'input': 'https://rgo.ru/upload/s34web.imageadapter/668e00e0ed33855a9c79de12d2f88206/2131465.jpg'}]}} ``` {% endcode %}
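If you plan to experiment with such photos, it may be worth checking for the `error` status explicitly while polling, instead of waiting for a video URL that will never arrive. A minimal sketch (the `report_result` helper is our own illustration, not part of the API):

{% code overflow="wrap" %}
```python
def report_result(response_data: dict) -> None:
    # Inspect the status field returned by the GET /v2/video/generations endpoint
    status = response_data.get("status")
    if status == "error":
        # The error payload describes what went wrong, e.g. the face-detection failure shown above
        print("Generation failed:", response_data.get("error"))
    elif status == "completed":
        print("Video URL:", response_data["video"]["url"])
    else:
        print("Still in progress:", status)
```
{% endcode %}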
## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["klingai/kling-video-v1.6-standard-effects"]},"image_url":{"anyOf":[{"type":"string","format":"uri"},{"type":"array","items":{"type":"string","format":"uri"}}],"description":"For hug, kiss, and heart_gesture effects, pass an array containing exactly two image URLs. For squish or expansion, only one image URL is required."},"effect_scene":{"type":"string","enum":["magic_fireball","pet_moto_rider","media_interview","pet_lion","pet_delivery","pet_chef","santa_gifts","santa_hug","girlfriend","boyfriend","heart_gesture_1","pet_wizard","smoke_smoke","thumbs_up","instant_kid","dollar_rain","cry_cry","building_collapse","gun_shot","mushroom","double_gun","pet_warrior","lightning_power","jesus_hug","shark_alert","long_hair","lie_flat","polar_bear_hug","brown_bear_hug","jazz_jazz","office_escape_plow","fly_fly","watermelon_bomb","pet_dance","boss_coming","wool_curly","iron_warrior","pet_bee","marry_me","swing_swing","day_to_night","piggy_morph","wig_out","car_explosion","ski_ski","tiger_hug","siblings","construction_worker","let's_ride","snatched","magic_broom","felt_felt","jumpdrop","celebration","splashsplash","hula","surfsurf","fairy_wing","angel_wing","dark_wing","skateskate","plushcut","jelly_press","jelly_slice","jelly_squish","jelly_jiggle","pixelpixel","yearbook","instant_film","anime_figure","rocketrocket","bloombloom","dizzydizzy","fuzzyfuzzy","squish","expansion","hug","kiss","heart_gesture","fight"],"description":"Video effect scene type"},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10],"default":"5"}},"required":["model","image_url","effect_scene"],"title":"klingai/kling-video-v1.6-standard-effects"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. 
This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. ## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Code Example In this example, we'll try to make two people from different photos (provided via URLs) form a romantic heart shape with their hands. No prompt is required — the effect is selected by setting the appropriate value for the `effect_scene` parameter. {% hint style="info" %} For effects that involve two images, the first one will always appear on the left side of the video, and the second one on the right. Therefore, to achieve the most natural-looking result, you may sometimes need to swap the image order. {% endhint %} {% hint style="info" %} In videos featuring a single person (the `squish` and `expansion` effects), an audio track is also generated — a mix of music and material interaction sounds, such as rubber squeaks and similar effects. {% endhint %}
Input images (preview)
| Photo #1 | Photo #2 |
| -------- | -------- |
The code below creates a video generation task, then automatically polls the server every 10 seconds until it finally receives the video URL. The average generation time is approximately 2 minutes. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time base_url = "https://api.aimlapi.com/v2" api_key = "" ref_img_url1 = "https://images.pexels.com/photos/733872/pexels-photo-733872.jpeg" ref_img_url2 = "https://storage.googleapis.com/falserverless/juggernaut_examples/QEW5VrzccxGva7mPfEXjf.png" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/generate/video/kling/generation" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "klingai/kling-video-v1.6-standard-effects", "image_url": [ref_img_url1, ref_img_url2], "duration": 5, "effect_scene": "kiss" } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() #print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/generate/video/kling/generation" params = { "generation_id": gen_id, } # Insert your AIML API Key instead of : headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Gen_ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Gen_ID: aa8d6bf3-9b6c-4d0c-a9bc-898644b2594d:kling-video/v1.6/standard/effects Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': 'aa8d6bf3-9b6c-4d0c-a9bc-898644b2594d:kling-video/v1.6/standard/effects', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/eagle/files/monkey/yKPuDR3Korzpyh14ZKPms_output.mp4', 'content_type': 'video/mp4', 'file_name': 'output.mp4', 'file_size': 5375765}} ``` {% endcode %}
Generated Video (GIF Preview)
--- # Source: https://docs.aimlapi.com/api-references/video-models/kling-ai/v1.6-standard-multi-image-to-video.md # v1.6-standard/multi-image-to-video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `kling-video/v1.6/standard/multi-image-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} This model creates dynamic videos from multiple input images with enhanced temporal consistency and natural transitions. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`. \ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["kling-video/v1.6/standard/multi-image-to-video"]},"image_list":{"type":"array","items":{"type":"string","format":"uri"},"minItems":2,"maxItems":4,"description":"Array of image URLs for multi-image-to-video generation"},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"aspect_ratio":{"type":"string","enum":["16:9","9:16","1:1"],"default":"16:9","description":"The aspect ratio of the generated video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10],"default":"5"},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."}},"required":["model","image_list"],"title":"kling-video/v1.6/standard/multi-image-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server In this example, we pass the model three reference images and ask it to generate a video of a graceful ballerina dancing outside a circus tent. The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "kling-video/v1.6/standard/multi-image-to-video", "prompt": "A graceful ballerina dancing outside a circus tent on green grass, with colorful wildflowers swaying around her as she twirls and poses in the meadow.", "image_list":[ "https://storage.googleapis.com/falserverless/example_inputs/veo31-r2v-input-1.png", "https://storage.googleapis.com/falserverless/example_inputs/veo31-r2v-input-2.png", "https://storage.googleapis.com/falserverless/example_inputs/veo31-r2v-input-3.png" ], "duration": "5", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 # 1000 sec = 16 min 40 sec while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["queued", "generating"]: print(f"Status: {status}. Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. 
Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "kling-video/v1.6/standard/multi-image-to-video", prompt: `A graceful ballerina dancing outside a circus tent on green grass, with colorful wildflowers swaying around her as she twirls and poses in the meadow.`, image_urls:[ "https://storage.googleapis.com/falserverless/example_inputs/veo31-r2v-input-1.png", "https://storage.googleapis.com/falserverless/example_inputs/veo31-r2v-input-2.png", "https://storage.googleapis.com/falserverless/example_inputs/veo31-r2v-input-3.png" ], duration: "5", }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 15 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec = 16 min 40 sec const interval = 15 * 1000; // 15 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }) } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: hMDMSCevwslpo4mCYcAq6 Status: queued. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: {'id': 'hMDMSCevwslpo4mCYcAq6', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/kangaroo/bs2/upload-ylab-stunt-sgp/muse/784256485483880450/VIDEO/20260120/4ce5531fa9b76040406160e4de9aade3-e3fecf53-c465-4fae-930d-ca6fb4f5fa35.mp4?cacheKey=ChtzZWN1cml0eS5rbGluZy5tZXRhX2VuY3J5cHQSsAH5QpP_T20xhfZgD0_rzkVWFOOMi7tUMuP4dUQ09Jq6rYmb93RbfnW-Rs49q3mqiGOqI81iK-XP8epTf1_MTdQXEanJ2bQXcRIwdsW0avYR5Jl8zYFmQ-1VosskLDw18lst4S3L1MlkAGz1J83RefPcPJXHghSQeXMAVm7XomoxHPe3r0oLhOK1_53_pzawb1dOv59rcGt2BIPNLqHrgnKwvWBKjNQgmJmXjd1GCLfHlBoSEWFMYg6sFzvDzdpgVuS6Rze6IiBbx-DP_va2vmNp6ZDdaZgVJejcfrBIg2OXoS-Q1Jy1YSgFMAE&x-kcdn-pid=112781&ksSecret=23cad3f0f321b18c1efcd147b62b61a8&ksTime=6996ec46'}} ``` {% endcode %}
**Processing time**: \~ 2 min 7 sec. **Generated video** (1920x1080, without sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/kling-ai/v1.6-standard-text-to-video.md # v1.6-standard/text-to-video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `kling-video/v1.6/standard/text-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} An advanced AI video generation model, designed to create high-quality videos from text prompts and images. This version introduces significant improvements in prompt adherence, visual quality, and dynamic action rendering, enabling users to generate more consistent and visually appealing results compared to its predecessor, Kling 1.5. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
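Before the formal schemas, here is a compressed sketch of that two-step flow using the universal endpoint — a single POST followed by one GET, without the polling loop used in the full example further down the page:

{% code overflow="wrap" %}
```python
import requests

# Insert your AIML API key here
api_key = ""
base_url = "https://api.aimlapi.com/v2"
headers = {"Authorization": f"Bearer {api_key}"}

# Step 1: create the generation task and get its id
task = requests.post(
    f"{base_url}/video/generations",
    headers=headers,
    json={
        "model": "kling-video/v1.6/standard/text-to-video",
        "prompt": "A cheerful white raccoon running through a sequoia forest",
    },
).json()

# Step 2: ask the server for the result using the generation id
# (in practice this call is repeated until the status becomes "completed")
result = requests.get(
    f"{base_url}/video/generations",
    headers=headers,
    params={"generation_id": task["id"]},
).json()
print(result["status"])
```
{% endcode %}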
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["kling-video/v1.6/standard/text-to-video"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10],"default":"5"},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"cfg_scale":{"type":"number","minimum":0,"maximum":1,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt."},"aspect_ratio":{"type":"string","enum":["16:9","9:16","1:1"],"description":"The aspect ratio of the generated video."}},"required":["model","prompt"],"title":"kling-video/v1.6/standard/text-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "kling-video/v1.6/standard/text-to-video", "prompt": "A cheerful white raccoon running through a sequoia forest", "aspect_ratio": "16:9", "duration": "5" } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 # 1000 sec = 16 min 40 sec while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["queued", "generating"]: print(f"Status: {status}. Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. 
Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; const https = require("https"); const { URL } = require("url"); // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: 'kling-video/v1.6/standard/text-to-video', prompt: ` A cheerful white raccoon running through a sequoia forest. `, duration: 5, aspect_ratio: '16:9' }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data) } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const result = JSON.parse(body); callback(result); } }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json" } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const result = JSON.parse(body); callback(result); }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.end(); } // Initiates video generation and checks the status every 15 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec = 16 min 40 sec const interval = 15 * 1000; // 15 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }) } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: vn9XMO80UyV9BSq6K6G_O Status: queued. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: completed Processing complete: {'id': 'vn9XMO80UyV9BSq6K6G_O', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/flamingo/files/b/0a8b2511/HG_-uFLYZfjQJwW8bS3oF_output.mp4'}} ``` {% endcode %}
**Processing time**: \~ 3 min 6 sec. **Generated video** (1280x720, without sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/kling-ai/v1.6-standart-image-to-video.md # v1.6-standard/image-to-video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `kling-video/v1.6/standard/image-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} An advanced AI video generation model developed by Kuaishou Technology, designed to create high-quality videos from text prompts and images. This version introduces significant improvements in prompt adherence, visual quality, and dynamic action rendering, enabling users to generate more consistent and visually appealing results compared to its predecessor, Kling 1.5. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`. \ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server {% hint style="info" %} The aspect ratio of the generated video is solely determined by the aspect ratio of the input reference image. {% endhint %} ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["kling-video/v1.6/standard/image-to-video"]},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame for the video."},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10],"default":"5"},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"cfg_scale":{"type":"number","minimum":0,"maximum":1,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt."}},"required":["model","image_url","prompt"],"title":"kling-video/v1.6/standard/image-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server We have a classic [reproduction](https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg) of the famous da Vinci painting. Let's ask the model to generate a video where the Mona Lisa puts on glasses. The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "kling-video/v1.6/standard/image-to-video", "prompt": "Mona Lisa puts on glasses with her hands.", "image_url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/mona_lisa_extended.jpg", "duration": "5", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 # 1000 sec = 16 min 40 sec while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["queued", "generating"]: print(f"Status: {status}. Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. 
Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "kling-video/v1.6/standard/image-to-video", prompt: "Mona Lisa puts on glasses with her hands.", image_url: "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/mona_lisa_extended.jpg", duration: "5", }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 15 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec = 16 min 40 sec const interval = 15 * 1000; // 15 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }) } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: vIvy_TKgZ0vwl7ZYG6Hy0 Status: queued. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: {'id': 'vIvy_TKgZ0vwl7ZYG6Hy0', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/flamingo/files/b/0a8b24f5/tyvgkOz63LLHeMJdbyKAq_output.mp4'}} ``` {% endcode %}
**Processing time**: \~ 2 min 2 sec. **Generated video** (1280x720, without sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/kling-ai/v2-master-image-to-video.md # v2-master/image-to-video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `klingai/v2-master-image-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} Compared to [v1.6](https://docs.aimlapi.com/api-references/video-models/kling-ai/v1.6-pro-image-to-video), this Kling model better aligns with the prompt and delivers more dynamic and visually appealing results. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["klingai/v2-master-image-to-video"]},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame for the video."},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10],"default":"5"},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"cfg_scale":{"type":"number","minimum":0,"maximum":1,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt."},"static_mask":{"type":"string","format":"uri","description":"URL of the image for Static Brush Application Area (Mask image created by users using the motion brush)."},"dynamic_masks":{"type":"array","items":{"type":"object","properties":{"mask":{"type":"string"},"trajectories":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer"},"y":{"type":"integer"}},"required":["x","y"]},"minItems":2,"maxItems":77}},"required":["mask","trajectories"]},"maxItems":6,"description":"List of dynamic masks."},"camera_control":{"type":"object","properties":{"type":{"type":"string","enum":["simple","down_back","forward_up","right_turn_forward","left_turn_forward"]},"config":{"type":"object","properties":{"horizontal":{"type":"number","minimum":-10,"maximum":10},"vertical":{"type":"number","minimum":-10,"maximum":10},"pan":{"type":"number","minimum":-10,"maximum":10},"tilt":{"type":"number","minimum":-10,"maximum":10},"roll":{"type":"number","minimum":-10,"maximum":10},"zoom":{"type":"number","minimum":-10,"maximum":10}}}},"description":"Camera control parameters."}},"required":["model","image_url"],"title":"klingai/v2-master-image-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during 
generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. ## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Code Example The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% hint style="info" %} This model produces highly detailed and natural-looking videos, so generation may take around 5–6 minutes for a 5-second video and 11-14 minutes for a 10-second video. 
{% endhint %} {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time base_url = "https://api.aimlapi.com/v2" api_key = "" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/generate/video/kling/generation" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "klingai/v2-master-image-to-video", "prompt": "Mona Lisa puts on glasses with her hands.", "image_url": "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", "duration": "5", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/generate/video/kling/generation" params = { "generation_id": gen_id, } # Insert your AIML API Key instead of : headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) # print("Generation:", response.json()) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Gen_ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Gen_ID: 88a64d31-cce4-41e8-b9d8-5392e8c2a6d4:kling-video/v2/master/image-to-video Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': '88a64d31-cce4-41e8-b9d8-5392e8c2a6d4:kling-video/v2/master/image-to-video', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/eagle/files/kangaroo/iKWC3BJOdav7Cy8cWSO9k_output.mp4', 'content_type': 'video/mp4', 'file_name': 'output.mp4', 'file_size': 5150376}} ``` {% endcode %}
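Once the status switches to `completed`, the response contains a direct URL to the generated MP4 file. Below is a minimal sketch of saving it locally with `requests`; the URL is taken from the example output above, and in practice you would read it from `response_data["video"]["url"]`:

{% code overflow="wrap" %}
```python
import requests

# In practice, take the URL from the completed response:
# video_url = response_data["video"]["url"]
video_url = "https://cdn.aimlapi.com/eagle/files/kangaroo/iKWC3BJOdav7Cy8cWSO9k_output.mp4"

# Stream the file to disk in chunks to avoid holding the whole video in memory
video_response = requests.get(video_url, stream=True)
video_response.raise_for_status()
with open("output.mp4", "wb") as file:
    for chunk in video_response.iter_content(chunk_size=8192):
        file.write(chunk)
```
{% endcode %}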
Generated Video **Original**: [784x1172](https://drive.google.com/file/d/1CtkWU6zvsTZj5O82dWnp42tcfRWSDWyH/view?usp=sharing) **Low-res GIF preview**:

"prompt": "Mona Lisa puts on glasses with her hands."

--- # Source: https://docs.aimlapi.com/api-references/video-models/kling-ai/v2-master-text-to-video.md # v2-master/text-to-video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `klingai/v2-master-text-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} Compared to [v1.6](https://docs.aimlapi.com/api-references/video-models/kling-ai/v1.6-pro-text-to-video), this Kling model better aligns with the prompt and delivers more dynamic and visually appealing results. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["klingai/v2-master-text-to-video"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"aspect_ratio":{"type":"string","enum":["16:9","9:16","1:1"],"default":"16:9","description":"The aspect ratio of the generated video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10]},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"cfg_scale":{"type":"number","minimum":0,"maximum":1,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt."},"camera_control":{"type":"string","enum":["down_back","forward_up","right_turn_forward","left_turn_forward"],"description":"Camera control parameters."},"advanced_camera_control":{"type":"object","properties":{"movement_type":{"type":"string","enum":["horizontal","vertical","pan","tilt","roll","zoom"],"description":"The type of camera movement."},"movement_value":{"type":"integer","minimum":-10,"maximum":10,"description":"The value of the camera movement."}},"required":["movement_type","movement_value"],"description":"Advanced camera control parameters."}},"required":["model","prompt"],"title":"klingai/v2-master-text-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Code Example The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% hint style="info" %} This model produces highly detailed and natural-looking videos, so generation may take around 5–6 minutes for a 5-second video and 11-14 minutes for a 10-second video. 
{% endhint %} {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time base_url = "https://api.aimlapi.com/v2" api_key = "" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/generate/video/kling/generation" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "klingai/v2-master-text-to-video", "prompt": "A cheerful white raccoon running through a sequoia forest", "aspect_ratio": "16:9", "duration": "5", "cfg_scale": 0.9 } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/generate/video/kling/generation" params = { "generation_id": gen_id, } # Insert your AIML API Key instead of : headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Gen_ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Gen_ID: 10c09c56-2e00-4a64-89ec-358ff71f8144:kling-video/v2/master/text-to-video Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': '10c09c56-2e00-4a64-89ec-358ff71f8144:kling-video/v2/master/text-to-video', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/eagle/files/lion/0jOQ9V3lSX-nz16Xu6BMV_output.mp4', 'content_type': 'video/mp4', 'file_name': 'output.mp4', 'file_size': 11664920}} ``` {% endcode %}
**Original**: [1280x720](https://drive.google.com/file/d/1kGC9QJcypu6Qzjred1Bo1YpnuTx25yNV/view?usp=sharing) **Low-res GIF preview**:

"prompt": "A cheerful white raccoon running through a sequoia forest"

--- # Source: https://docs.aimlapi.com/api-references/video-models/kling-ai/v2.1-master-image-to-video.md # v2.1-master/image-to-video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `klingai/v2.1-master-image-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} An image-to-video generation model with impressive motion fluidity, cinematic visuals, and exceptional prompt precision. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["klingai/v2.1-master-image-to-video"]},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame for the video."},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10],"default":"5"},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"cfg_scale":{"type":"number","minimum":0,"maximum":1,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt."},"static_mask":{"type":"string","format":"uri","description":"URL of the image for Static Brush Application Area (Mask image created by users using the motion brush)."},"dynamic_masks":{"type":"array","items":{"type":"object","properties":{"mask":{"type":"string"},"trajectories":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer"},"y":{"type":"integer"}},"required":["x","y"]},"minItems":2,"maxItems":77}},"required":["mask","trajectories"]},"maxItems":6,"description":"List of dynamic masks."},"camera_control":{"type":"object","properties":{"type":{"type":"string","enum":["simple","down_back","forward_up","right_turn_forward","left_turn_forward"]},"config":{"type":"object","properties":{"horizontal":{"type":"number","minimum":-10,"maximum":10},"vertical":{"type":"number","minimum":-10,"maximum":10},"pan":{"type":"number","minimum":-10,"maximum":10},"tilt":{"type":"number","minimum":-10,"maximum":10},"roll":{"type":"number","minimum":-10,"maximum":10},"zoom":{"type":"number","minimum":-10,"maximum":10}}}},"description":"Camera control parameters."}},"required":["model","image_url"],"title":"klingai/v2.1-master-image-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during 
generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. ## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Code Example The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% hint style="info" %} This model produces highly detailed and natural-looking videos, so generation may take around 5–6 minutes for a 5-second video and 11-14 minutes for a 10-second video. 
{% endhint %} {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time base_url = "https://api.aimlapi.com/v2" api_key = "" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/generate/video/kling/generation" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "klingai/v2.1-master-image-to-video", "prompt": "Mona Lisa puts on glasses with her hands.", "image_url": "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", "duration": "5", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() # print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/generate/video/kling/generation" params = { "generation_id": gen_id, } # Insert your AIML API Key instead of : headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: b906d239-565b-4012-9234-246189283143:kling-video/v2.1/master/image-to-video Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': 'b906d239-565b-4012-9234-246189283143:kling-video/v2.1/master/image-to-video', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/eagle/files/penguin/Rbq5R5eMb_nH5CEMlLBDz_output.mp4', 'content_type': 'video/mp4', 'file_name': 'output.mp4', 'file_size': 16249652}} ``` {% endcode %}
**Original**: [1180x1756](https://drive.google.com/file/d/1EJButNYU2ntS-tr7HuLXI2D17Zw3zg6_/view?usp=sharing) **Low-res GIF preview**:

"prompt": "Mona Lisa puts on glasses with her hands."

--- # Source: https://docs.aimlapi.com/api-references/video-models/kling-ai/v2.1-master-text-to-video.md # v2.1-master/text-to-video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `klingai/v2.1-master-text-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} A text-to-video generation model with impressive motion fluidity, cinematic visuals, and exceptional prompt precision. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["klingai/v2.1-master-text-to-video"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"aspect_ratio":{"type":"string","enum":["16:9","9:16","1:1"],"default":"16:9","description":"The aspect ratio of the generated video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10]},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"cfg_scale":{"type":"number","minimum":0,"maximum":1,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt."},"camera_control":{"type":"string","enum":["down_back","forward_up","right_turn_forward","left_turn_forward"],"description":"Camera control parameters."},"advanced_camera_control":{"type":"object","properties":{"movement_type":{"type":"string","enum":["horizontal","vertical","pan","tilt","roll","zoom"],"description":"The type of camera movement."},"movement_value":{"type":"integer","minimum":-10,"maximum":10,"description":"The value of the camera movement."}},"required":["movement_type","movement_value"],"description":"Advanced camera control parameters."}},"required":["model","prompt"],"title":"klingai/v2.1-master-text-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Code Example The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% hint style="info" %} This model produces highly detailed and natural-looking videos, so generation may take around 5–6 minutes for a 5-second video and 11-14 minutes for a 10-second video. {% endhint %} {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : aimlapi_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/generate/video/kling/generation" headers = { "Authorization": f"Bearer {aimlapi_key}", } data = { "model": "klingai/v2.1-master-text-to-video", "prompt": ''' A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming. 
''' } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/generate/video/kling/generation" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {aimlapi_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% endtabs %}
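If generation fails, the task status becomes `error` and, according to the response schema, the `error` field carries a `name` and a `message`. A small sketch of how the final payload could be inspected for this case (a hypothetical helper, not part of the original example):

{% code overflow="wrap" %}
```python
# Sketch: explicit handling of the "error" status defined in the response schema
def report_result(response_data):
    status = response_data.get("status")
    if status == "completed":
        print("Video URL:", response_data["video"]["url"])
    elif status == "error":
        err = response_data.get("error") or {}
        print(f"Generation failed: {err.get('name')} - {err.get('message')}")
    else:
        print("Unexpected status:", status)
```
{% endcode %}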
Response {% code overflow="wrap" %} ```json5 Generation ID: ce81dc29-0fb7-4dc9-b412-355933b1b9cf:kling-video/v2.1/master/text-to-video Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': 'ce81dc29-0fb7-4dc9-b412-355933b1b9cf:kling-video/v2.1/master/text-to-video', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/eagle/files/elephant/GOzkGbKnKFhs4uzkbR99Z_output.mp4', 'content_type': 'video/mp4', 'file_name': 'output.mp4', 'file_size': 15676617}} ``` {% endcode %}
**Original**: [1920x1080](https://drive.google.com/file/d/1ddfusnDAdJ3Fc5bnDuZmCI8PBQpnU3ZQ/view?usp=sharing) **Low-res GIF preview**:

"A menacing evil dragon appears in a distance above the tallest mountain, then
rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming."

More results

"A cheerful white raccoon running through a sequoia forest"

"A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses."

--- # Source: https://docs.aimlapi.com/api-references/video-models/kling-ai/v2.1-pro-image-to-video.md # v2.1-pro/image-to-video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `kling-video/v2.1/pro/image-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} A more advanced Kling 2.1 variant designed for professional video production, featuring high visual fidelity, smooth camera work, and detailed motion control, suited for cinematic storytelling. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["kling-video/v2.1/pro/image-to-video"]},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame for the video."},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10],"default":"5"},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"cfg_scale":{"type":"number","minimum":0,"maximum":1,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt."},"static_mask":{"type":"string","format":"uri","description":"URL of the image for Static Brush Application Area (Mask image created by users using the motion brush)."},"dynamic_masks":{"type":"array","items":{"type":"object","properties":{"mask":{"type":"string"},"trajectories":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer"},"y":{"type":"integer"}},"required":["x","y"]},"minItems":2,"maxItems":77}},"required":["mask","trajectories"]},"maxItems":6,"description":"List of dynamic masks."},"camera_control":{"type":"object","properties":{"type":{"type":"string","enum":["simple","down_back","forward_up","right_turn_forward","left_turn_forward"]},"config":{"type":"object","properties":{"horizontal":{"type":"number","minimum":-10,"maximum":10},"vertical":{"type":"number","minimum":-10,"maximum":10},"pan":{"type":"number","minimum":-10,"maximum":10},"tilt":{"type":"number","minimum":-10,"maximum":10},"roll":{"type":"number","minimum":-10,"maximum":10},"zoom":{"type":"number","minimum":-10,"maximum":10}}}},"description":"Camera control parameters."}},"required":["model","image_url"],"title":"kling-video/v2.1/pro/image-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during 
generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. ## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Code Example The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% hint style="info" %} Generation may take around 3–4 minutes for a 5-second video and 6-8 minutes for a 10-second video. 
{% endhint %} {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/generate/video/kling/generation" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "kling-video/v2.1/pro/image-to-video", "prompt": "Mona Lisa puts on glasses with her hands.", "image_url": "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", "duration": "5", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/generate/video/kling/generation" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: 789392067490349060 Status: queued Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': '789392067490349060', 'status': 'completed', 'video': {'url': 'https://v15-kling.klingai.com/bs2/upload-ylab-stunt-sgp/se/stream_lake_m2v_img2video_v21_master/a602a9c7-30de-42e4-957e-865872f33f7c_raw_video.mp4?x-kcdn-pid=112372'}} ``` {% endcode %}
Generated Video **Original**: [1180x1756](https://drive.google.com/file/d/1YRqc3X8bpoN_KvtIH2WwGZAg2HeDQvjB/view?usp=sharing) **Low-res GIF preview**:

"prompt": "Mona Lisa puts on glasses with her hands."

--- # Source: https://docs.aimlapi.com/api-references/video-models/kling-ai/v2.1-standard-image-to-video.md # v2.1-standard/image-to-video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `kling-video/v2.1/standard/image-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} A cost-efficient variant of the Kling 2.1 model that still supports 1080p output. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["kling-video/v2.1/standard/image-to-video"]},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame for the video."},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10],"default":"5"},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"cfg_scale":{"type":"number","minimum":0,"maximum":1,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt."},"static_mask":{"type":"string","format":"uri","description":"URL of the image for Static Brush Application Area (Mask image created by users using the motion brush)."},"dynamic_masks":{"type":"array","items":{"type":"object","properties":{"mask":{"type":"string"},"trajectories":{"type":"array","items":{"type":"object","properties":{"x":{"type":"integer"},"y":{"type":"integer"}},"required":["x","y"]},"minItems":2,"maxItems":77}},"required":["mask","trajectories"]},"maxItems":6,"description":"List of dynamic masks."},"camera_control":{"type":"object","properties":{"type":{"type":"string","enum":["simple","down_back","forward_up","right_turn_forward","left_turn_forward"]},"config":{"type":"object","properties":{"horizontal":{"type":"number","minimum":-10,"maximum":10},"vertical":{"type":"number","minimum":-10,"maximum":10},"pan":{"type":"number","minimum":-10,"maximum":10},"tilt":{"type":"number","minimum":-10,"maximum":10},"roll":{"type":"number","minimum":-10,"maximum":10},"zoom":{"type":"number","minimum":-10,"maximum":10}}}},"description":"Camera control parameters."}},"required":["model","image_url"],"title":"kling-video/v2.1/standard/image-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during 
generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. ## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Code Example The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/generate/video/kling/generation" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "kling-video/v2.1/standard/image-to-video", "prompt": "Mona Lisa puts on glasses with her hands.", "image_url": "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", "duration": "5", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/generate/video/kling/generation" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: 789339022468063304 Status: queued Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': '789339022468063304', 'status': 'completed', 'video': {'url': 'https://v15-kling.klingai.com/bs2/upload-ylab-stunt-sgp/se/stream_lake_m2v_img2video_v21_master/56b24375-28fc-4569-a483-38d44fd8d4f1_raw_video.mp4?x-kcdn-pid=112372'}} ``` {% endcode %}
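The polling loop above stops as soon as it receives the final response with the video URL. If you also want to save the clip locally, the `video.url` field of the completed response can be downloaded directly. A minimal sketch, assuming you pass it the completed response (the output file name is illustrative):

{% code overflow="wrap" %}
```python
import requests

def download_video(completed_response, file_name="generated_video.mp4"):
    # Direct link returned in the completed task response
    video_url = completed_response["video"]["url"]
    # Stream the file to disk in chunks instead of loading it all into memory
    with requests.get(video_url, stream=True) as video_response:
        video_response.raise_for_status()
        with open(file_name, "wb") as file:
            for chunk in video_response.iter_content(chunk_size=8192):
                file.write(chunk)
    print("Video saved to:", file_name)
```
{% endcode %}

For example, `download_video(response_data)` could be called on the dictionary returned by `main()` in the script above.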
Generated Video **Original**: [1180x1756](https://drive.google.com/file/d/18O8DdqFSNr845sH_VJT9jQOoDrEJS2xc/view?usp=sharing) **Low-res GIF preview**:

"prompt": "Mona Lisa puts on glasses with her hands."

--- # Source: https://docs.aimlapi.com/api-references/video-models/kling-ai/v2.5-turbo-pro-image-to-video.md # v2.5-turbo/pro/image-to-video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `klingai/v2.5-turbo/pro/image-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} KlingAI’s most advanced image-to-video model as of September 2025. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["klingai/v2.5-turbo/pro/image-to-video"]},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame for the video."},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10],"default":"5"},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"cfg_scale":{"type":"number","minimum":0,"maximum":1,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt."}},"required":["model","image_url","prompt"],"title":"klingai/v2.5-turbo/pro/image-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Code Example The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/generate/video/kling/generation" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "klingai/v2.5-turbo/pro/image-to-video", "prompt": "Mona Lisa puts on glasses with her hands.", "image_url": "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", "duration": "5", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/generate/video/kling/generation" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... 
Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: 935ed026-34da-4357-bb8d-d7f444a2393b:klingai/v2.5-turbo/pro/image-to-video Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': '935ed026-34da-4357-bb8d-d7f444a2393b:klingai/v2.5-turbo/pro/image-to-video', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/eagle/files/koala/Wr1zMMgriXmIeQh0GGjZV_output.mp4', 'content_type': 'video/mp4', 'file_name': 'output.mp4', 'file_size': 17744261}} ``` {% endcode %}
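The full example above submits the task through the legacy Kling URL. As the hint earlier on this page notes, the universal short URL also accepts this model; the sketch below sends the same task there and adds the optional `negative_prompt` and `cfg_scale` fields from the POST schema (the parameter values are purely illustrative):

{% code overflow="wrap" %}
```python
import requests

api_key = ""  # Insert your AIML API Key

response = requests.post(
    "https://api.aimlapi.com/v2/video/generations",
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "model": "klingai/v2.5-turbo/pro/image-to-video",
        "prompt": "Mona Lisa puts on glasses with her hands.",
        "image_url": "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg",
        "duration": 5,
        # Optional fields (illustrative values)
        "negative_prompt": "blurry face, distorted hands",
        "cfg_scale": 0.7,
    },
)
print(response.json())  # Contains the generation ID to use when polling
```
{% endcode %}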
Generated Video **Original**: [1180x1756](https://drive.google.com/file/d/1UYHWEX_Rghb4NzQ2umIsoj-Vuhs18JNv/view?usp=sharing) **Low-res GIF preview**:

"prompt": "Mona Lisa puts on glasses with her hands."

--- # Source: https://docs.aimlapi.com/api-references/video-models/kling-ai/v2.5-turbo-pro-text-to-video.md # v2.5-turbo/pro/text-to-video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `klingai/v2.5-turbo/pro/text-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} KlingAI’s most advanced text-to-video model as of September 2025. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["klingai/v2.5-turbo/pro/text-to-video"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10],"default":"5"},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"cfg_scale":{"type":"number","minimum":0,"maximum":1,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt."},"aspect_ratio":{"type":"string","enum":["16:9","9:16","1:1"],"description":"The aspect ratio of the generated video."}},"required":["model","prompt"],"title":"klingai/v2.5-turbo/pro/text-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Code Example The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/generate/video/kling/generation" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "klingai/v2.5-turbo/pro/text-to-video", "prompt": "A cheerful white raccoon running through a sequoia forest", "aspect_ratio": "16:9", "duration": "5", "cfg_scale": 0.9 } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/generate/video/kling/generation" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. 
Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: a248abc5-3122-45a9-a9ee-352e4642e01c:klingai/v2.5-turbo/pro/text-to-video Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': 'a248abc5-3122-45a9-a9ee-352e4642e01c:klingai/v2.5-turbo/pro/text-to-video', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/eagle/files/lion/l7XgndUauE6MRszEJjNSm_output.mp4', 'content_type': 'video/mp4', 'file_name': 'output.mp4', 'file_size': 20235536}} ``` {% endcode %}
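According to the response schema above, a failed task comes back with `status: "error"` and an `error` object containing `name` and `message`. A hedged sketch of how the final branch of the polling loop could report that case (the function name is illustrative):

{% code overflow="wrap" %}
```python
def report_result(response_data):
    # Completed tasks carry the video URL; failed tasks carry an error object
    status = response_data.get("status")
    if status == "completed":
        print("Video URL:", response_data["video"]["url"])
    elif status == "error":
        error = response_data.get("error") or {}
        print(f"Generation failed: {error.get('name')} - {error.get('message')}")
    else:
        print("Task finished with status:", status)
```
{% endcode %}

In the example above, this could replace the plain `print("Processing complete: ...")` call once the loop exits the waiting states.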
**Original**: [1920x1080](https://drive.google.com/file/d/1jAhXF-NO9W3IncmivJXlTHOaI6c2BAqT/view?usp=sharing) **Low-res GIF preview**:

"prompt": "A cheerful white raccoon running through a sequoia forest"

--- # Source: https://docs.aimlapi.com/api-references/speech-models/voice-chat/elevenlabs/v3_alpha.md # v3\_alpha {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `elevenlabs/v3_alpha` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} The model supports a wide range of output formats and quality levels, text normalization, and over 70 languages. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure that the key is enabled in the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find [a code example](#quick-code-example) that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key from your account.\ :black\_small\_square: Provide your instructions via the `text` parameter and set the model voice in the `voice` parameter. :digit\_four: **(Optional)** **Adjust other optional parameters if needed** Only `text` and `voice` are required parameters for this model (and we’ve already filled them in for you in the example), but you can include optional parameters if needed to adjust the model’s behavior. Below, you can find the corresponding [API schema](#api-schemas), which lists all available parameters along with notes on how to use them. :digit\_five: **Run your modified code** Run your modified code in your development environment. Response time depends on various factors, but for simple prompts it rarely exceeds 5 seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step by step — feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). {% endhint %}
## Quick Code Example Here is an example of generating an audio response to the user input provided in the `text` parameter. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import os import requests def main(): url = "https://api.aimlapi.com/v1/tts" headers = { # Insert your AIML API Key instead of : "Authorization": "Bearer ", } payload = { "model": "elevenlabs/v3_alpha", "text": "Hi! What are you doing today?", "voice": "Alice" } response = requests.post(url, headers=headers, json=payload, stream=True) dist = os.path.abspath("audio.wav") with open(dist, "wb") as write_stream: for chunk in response.iter_content(chunk_size=8192): if chunk: write_stream.write(chunk) print("Audio saved to:", dist) main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const fs = require('fs'); const path = require('path'); const url = 'https://api.aimlapi.com/v1/tts'; async function main() { const response = await fetch(url, { method: 'POST', headers: { 'Authorization': 'Bearer ', 'Content-Type': 'application/json' }, body: JSON.stringify({ model: 'elevenlabs/v3_alpha', text: 'Hi! What are you doing today?', voice: 'Alice' }) }); const dist = path.resolve(__dirname, 'audio.wav'); // Path to save audio const fileStream = fs.createWriteStream(dist); // Write audio stream to file const reader = response.body.getReader(); while (true) { const { done, value } = await reader.read(); if (done) break; fileStream.write(value); } fileStream.end(); console.log('Audio saved to:', dist); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Audio saved to: c:\Users\user\Documents\Python Scripts\AUDIOs\audio.wav ``` {% endcode %}
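Only `text` and `voice` are required, but the API schema further down this page also lists optional parameters such as `output_format`, `seed`, and `voice_settings`. A hedged sketch of a payload that sets a few of them (the values are illustrative, not recommendations):

{% code overflow="wrap" %}
```python
payload = {
    "model": "elevenlabs/v3_alpha",
    "text": "Hi! What are you doing today?",
    "voice": "Alice",
    # Optional parameters from the API schema (illustrative values)
    "output_format": "mp3_44100_128",  # encoding of the returned audio
    "seed": 42,                        # best-effort deterministic sampling
    "voice_settings": {
        "stability": 0.5,              # lower values allow a broader emotional range
        "similarity_boost": 0.75,      # adherence to the original voice
        "speed": 1.0,                  # 1.0 is the default speaking speed
    },
}
```
{% endcode %}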
Listen to the audio response: {% embed url="" %} ## API Schemas ## POST /v1/tts > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.TextToSpeechResponse":{"type":"object","properties":{"metadata":{"type":"object","properties":{"transaction_key":{"type":"string"},"request_id":{"type":"string"},"sha256":{"type":"string"},"created":{"type":"string","format":"date-time"},"duration":{"type":"number"},"channels":{"type":"number"},"models":{"type":"array","items":{"type":"string"}},"model_info":{"type":"object","additionalProperties":{"type":"object","properties":{"name":{"type":"string"},"version":{"type":"string"},"arch":{"type":"string"}},"required":["name","version","arch"]}}},"required":["transaction_key","request_id","sha256","created","duration","channels","models","model_info"]}},"required":["metadata"]}}},"paths":{"/v1/tts":{"post":{"operationId":"VoiceModelsController_textToSpeech_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["elevenlabs/v3_alpha"]},"text":{"type":"string","description":"The text content to be converted to speech."},"voice":{"type":"string","enum":["Rachel","Drew","Clyde","Paul","Aria","Domi","Dave","Roger","Fin","Sarah","Antoni","Laura","Thomas","Charlie","George","Emily","Elli","Callum","Patrick","River","Harry","Liam","Dorothy","Josh","Arnold","Charlotte","Alice","Matilda","James","Joseph","Will","Jeremy","Jessica","Eric","Michael","Ethan","Chris","Gigi","Freya","Santa Claus","Brian","Grace","Daniel","Lily","Serena","Adam","Nicole","Bill","Jessie","Sam","Glinda","Giovanni","Mimi"],"default":"Rachel","description":"Name of the voice to be used."},"apply_text_normalization":{"type":"string","enum":["auto","on","off"],"description":"This parameter controls text normalization with three modes: 'auto', 'on', and 'off'. When set to 'auto', the system will automatically decide whether to apply text normalization (e.g., spelling out numbers). With 'on', text normalization will always be applied, while with 'off', it will be skipped."},"output_format":{"type":"string","enum":["mp3_22050_32","mp3_44100_32","mp3_44100_64","mp3_44100_96","mp3_44100_128","mp3_44100_192","pcm_8000","pcm_16000","pcm_22050","pcm_24000","pcm_44100","pcm_48000","ulaw_8000","alaw_8000","opus_48000_32","opus_48000_64","opus_48000_96","opus_48000_128","opus_48000_192"],"description":"Format of the output content for non-streaming requests. Controls how the generated audio data is encoded in the response."},"voice_settings":{"type":"object","properties":{"stability":{"type":"number","description":"Determines how stable the voice is and the randomness between each generation. Lower values introduce broader emotional range for the voice. Higher values can result in a monotonous voice with limited emotion."},"use_speaker_boost":{"type":"boolean","description":"This setting boosts the similarity to the original speaker. Using this setting requires a slightly higher computational load, which in turn increases latency."},"similarity_boost":{"type":"number","description":"Determines how closely the AI should adhere to the original voice when attempting to replicate it."},"style":{"type":"number","description":"Determines the style exaggeration of the voice. 
This setting attempts to amplify the style of the original speaker. It does consume additional computational resources and might increase latency if set to anything other than 0."},"speed":{"type":"number","description":"Adjusts the speed of the voice. A value of 1.0 is the default speed, while values less than 1.0 slow down the speech, and values greater than 1.0 speed it up."}},"description":"Voice settings overriding stored settings for the given voice. They are applied only on the given request."},"seed":{"type":"integer","description":"If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. Determinism is not guaranteed."}},"required":["model","text"]}}}},"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.TextToSpeechResponse"}}}}},"tags":["Voice Models"]}}}} ``` --- # Source: https://docs.aimlapi.com/api-references/video-models/pixverse/v5-5-image-to-video.md # v5.5/image-to-video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `pixverse/v5-5-image-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} The model generates high-quality video clips from text combined with an image, delivering smooth motion and sharp visual detail. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a reference image and a prompt.\ This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["pixverse/v5.5/image-to-video"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"image_url":{"type":"string","format":"uri","description":"URL of the image to be used as the first frame of the video."},"resolution":{"type":"string","enum":["360p","540p","720p","1080p"],"default":"720p","description":"An enumeration where the short side of the video frame determines the resolution."},"duration":{"type":"integer","description":"The output video length in seconds. The 1080p quality option does not support 8-second videos.","enum":[5,8,10],"default":"5"},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"style":{"type":"string","enum":["anime","3d_animation","clay","comic","cyberpunk"],"description":"The style of the generated video."},"seed":{"type":"integer","description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. If unspecified, a random number is chosen."},"generate_audio_switch":{"type":"boolean","default":false,"description":"Enable audio generation.\n- true: Audio on.\n- false: Audio off."},"generate_multi_clip_switch":{"type":"boolean","default":false,"description":"Enable multi-clip generation with dynamic camera changes.\n- true: Multi-clip. \n- false: Single-clip."},"thinking_type":{"type":"string","enum":["enabled","disabled","auto"],"default":"enabled","description":"Prompt reasoning enhancement mode. \n- \"enabled\": Turn on prompt optimization. \n- \"disabled\": Turn off prompt optimization. 
\n- \"auto\" or omitted: Let the model decide automatically."}},"required":["model","prompt","image_url"],"title":"pixverse/v5.5/image-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. ## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Replace with your actual AI/ML API key api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "pixverse/v5-5-image-to-video", "prompt": "Mona Lisa puts on glasses with her hands.", "image_url": "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() # print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) # print("Generation:", response.json()) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["waiting", "queued", "generating"]: print(f"Status: {status}. Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. 
Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "pixverse/v5-5-image-to-video", prompt: "Mona Lisa puts on glasses with her hands.", image_url: "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", duration: 5, }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 15 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 15 * 1000; // 15 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["waiting", "queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }) } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: jCajo_YQuMr5As6lN1lSg Status: queued. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: {'id': 'jCajo_YQuMr5As6lN1lSg', 'status': 'succeeded', 'video': {'url': 'https://cdn.aimlapi.com/panda/pixverse%2Fmp4%2Fmedia%2Fweb%2Fori%2FtFzvIwK3x79Lvz8cknMvj_seed2144515801.mp4'}} ``` {% endcode %}
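The request in the example uses only the required `model`, `prompt`, and `image_url` fields. The POST schema above also exposes resolution, style, audio, and prompt-reasoning controls; a sketch of a request body that sets some of them (values chosen only for illustration):

{% code overflow="wrap" %}
```python
data = {
    "model": "pixverse/v5-5-image-to-video",
    "prompt": "Mona Lisa puts on glasses with her hands.",
    "image_url": "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg",
    # Optional fields from the POST schema (illustrative values)
    "resolution": "1080p",          # short side of the video frame
    "duration": 5,                  # 1080p does not support 8-second videos
    "style": "anime",               # one of the predefined styles
    "generate_audio_switch": True,  # enable audio generation
    "thinking_type": "auto",        # let the model decide on prompt optimization
}
```
{% endcode %}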
**Processing time**: \~50 s. **Original**: [864x1280](https://drive.google.com/file/d/1Bn6g08TSUixk_Zc3e2BQyguljle_B7Iq/view?usp=sharing) **Low-res GIF preview**:

"Mona Lisa puts on glasses with her hands."

--- # Source: https://docs.aimlapi.com/api-references/video-models/pixverse/v5-5-text-to-video.md # v5.5/text-to-video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `pixverse/v5-5-text-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} This model provides faster text-to-video rendering with consistently sharp, realistic, and cinematic-quality results. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a prompt.\ This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["pixverse/v5.5/text-to-video"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"aspect_ratio":{"type":"string","enum":["16:9","4:3","1:1","3:4","9:16"],"default":"16:9","description":"The aspect ratio of the generated video."},"resolution":{"type":"string","enum":["360p","540p","720p","1080p"],"default":"720p","description":"An enumeration where the short side of the video frame determines the resolution."},"duration":{"type":"integer","description":"The output video length in seconds. The 1080p quality option does not support 8-second videos.","enum":[5,8,10],"default":"5"},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"style":{"type":"string","enum":["anime","3d_animation","clay","comic","cyberpunk"],"description":"The style of the generated video."},"seed":{"type":"integer","description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. If unspecified, a random number is chosen."},"generate_audio_switch":{"type":"boolean","default":false,"description":"Enable audio generation.\n- true: Audio on.\n- false: Audio off."},"generate_multi_clip_switch":{"type":"boolean","default":false,"description":"Enable multi-clip generation with dynamic camera changes.\n- true: Multi-clip. \n- false: Single-clip."},"thinking_type":{"type":"string","enum":["enabled","disabled","auto"],"default":"enabled","description":"Prompt reasoning enhancement mode. \n- \"enabled\": Turn on prompt optimization. \n- \"disabled\": Turn off prompt optimization. 
\n- \"auto\" or omitted: Let the model decide automatically."}},"required":["model","prompt"],"title":"pixverse/v5.5/text-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. ## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "pixverse/v5-5-text-to-video", "prompt": "A cheerful white raccoon running through a sequoia forest", "aspect_ratio": "16:9", "duration": "5", "resolution": "1080p" } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 # 1000 sec = 16 min 40 sec while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status in ["queued", "generating"]: print(f"Status: {status}. Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; const https = require("https"); const { URL } = require("url"); // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: 'pixverse/v5-5-text-to-video', prompt: ` A cheerful white raccoon running through a sequoia forest. 
`, duration: 5, aspect_ratio: '16:9', resolution: '1080p' }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data) } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const result = JSON.parse(body); callback(result); } }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json" } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const result = JSON.parse(body); callback(result); }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.end(); } // Initiates video generation and checks the status every 15 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec = 16 min 40 sec const interval = 15 * 1000; // 15 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }) } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': 'FxkOMGP_IRjjNHfzH3LTV', 'status': 'queued', 'meta': {'usage': {'credits_used': 5000000}}} Generation ID: FxkOMGP_IRjjNHfzH3LTV Status: queued. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: {'id': 'FxkOMGP_IRjjNHfzH3LTV', 'status': 'succeeded', 'video': {'url': 'https://cdn.aimlapi.com/panda/pixverse%2Fmp4%2Fmedia%2Fweb%2Fori%2FXxfCXAeT4Mr3QY0RVb564_seed1231972948.mp4'}} ``` {% endcode %}
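As the log above shows, the very first response already reports the cost of the task via `meta.usage.credits_used`, which is also part of the response schema. A small sketch that records it at submission time (the helper name is illustrative):

{% code overflow="wrap" %}
```python
def log_submission(gen_response):
    # gen_response is the JSON returned by the POST request in the example above
    gen_id = gen_response.get("id")
    usage = (gen_response.get("meta") or {}).get("usage") or {}
    print(f"Task {gen_id} submitted, credits used: {usage.get('credits_used')}")
    return gen_id
```
{% endcode %}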
**Processing time**: \~ 2 min 3 sec. **Generated video** (1920x1080, without sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/pixverse/v5-image-to-video.md # v5/image-to-video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `pixverse/v5/image-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} This model provides faster image-to-video rendering with consistently sharp, realistic, and cinematic-quality results. This model also generates videos with synchronized audio. For lip-sync input, you may supply text with a predefined voice. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a reference image and a prompt. This endpoint creates and sends a video generation task to the server — and returns a generation ID. For lip-sync input, you may supply text (`lip_sync_tts_content`) with a predefined voice (`lip_sync_tts_speaker`). ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["pixverse/v5/image-to-video"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"image_url":{"type":"string","format":"uri","description":"URL of the image to be used as the first frame of the video."},"resolution":{"type":"string","enum":["360p","540p","720p","1080p"],"default":"720p","description":"An enumeration where the short side of the video frame determines the resolution."},"duration":{"type":"integer","description":"The output video length in seconds. The 1080p quality option does not support 8-second videos.","enum":[5,8],"default":"5"},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"style":{"type":"string","enum":["anime","3d_animation","clay","comic","cyberpunk"],"description":"The style of the generated video."},"seed":{"type":"integer","description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. 
If unspecified, a random number is chosen."},"lip_sync_tts_content":{"type":"string","description":"The text content to be lip-synced in the video."},"lip_sync_tts_speaker":{"type":"string","enum":["Harper","Ava","Isabella","Sophia","Emily","Chloe","Julia","Mason","Jack","Liam","James","Oliver","Adrian","Ethan","Auto"],"description":"A predefined system voice used for generating speech in the video."}},"required":["model","prompt","image_url"],"title":"pixverse/v5/image-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # replace with your actual AI/ML API key api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/generate/video/pixverse/generation" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "pixverse/v5/image-to-video", "prompt": "Mona Lisa puts on glasses with her hands.", "image_url": "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", "duration": 5 } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/generate/video/pixverse/generation" params = { "generation_id": gen_id, } # Insert your AIML API Key instead of : headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) # print("Generation:", response.json()) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "queued" or status == "generating": print("Still waiting... 
Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "pixverse/v5/image-to-video", prompt: "Mona Lisa puts on glasses with her hands.", image_url: "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", duration: 5, }); const url = new URL(`${baseUrl}/generate/video/pixverse/generation`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/generate/video/pixverse/generation`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("Failed to start generation"); return; } const genId = genResponse.id; console.log("Gen_ID:", genId); const startTime = Date.now(); const timeout = 600000; const checkStatus = () => { if (Date.now() - startTime > timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, 10000); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': '8ac142d3-7c9f-4071-bdc6-d0f2d3d9b327:pixverse/v5/image-to-video', 'status': 'queued', 'meta': {'usage': {'tokens_used': 420000}}} Generation ID: 8ac142d3-7c9f-4071-bdc6-d0f2d3d9b327:pixverse/v5/image-to-video Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': '8ac142d3-7c9f-4071-bdc6-d0f2d3d9b327:pixverse/v5/image-to-video', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/eagle/files/elephant/uCLDKRtL_AeOrRAwiR8UH_output.mp4', 'content_type': 'video/mp4', 'file_name': 'output.mp4', 'file_size': 4259218}} ``` {% endcode %}
**Processing time**: \~1.5 min. **Original**: [864x1280](https://drive.google.com/file/d/1kld9uy5nb-R_9D0JrbWLFhE3z171WHTw/view?usp=sharing) **Low-res GIF preview**:

"Mona Lisa puts on glasses with her hands."

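Once the task reaches the `completed` status, the `video.url` field in the final response points to the generated MP4 file. If you want to keep the result locally, a minimal sketch of downloading it (the `save_video` helper and the way it plugs into `main()` above are illustrative only) could look like this:

{% code overflow="wrap" %}
```python
import requests

def save_video(response_data, file_name="output.mp4"):
    # The URL of the finished video comes from the `video.url` field of the GET response
    video_url = response_data["video"]["url"]
    with requests.get(video_url, stream=True) as video_response:
        video_response.raise_for_status()
        with open(file_name, "wb") as file:
            for chunk in video_response.iter_content(chunk_size=8192):
                file.write(chunk)
    print(f"Saved to {file_name}")

# Example usage with the result of the polling loop above:
# result = main()
# if result and result.get("status") == "completed":
#     save_video(result)
```
{% endcode %}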
## Full Example #2: Lip-Sync Now let’s test the parameters related to the lip-sync feature. We’ll generate a video with some character and give them a piece of text to speak. The text goes into the `lip_sync_tts_content` parameter, and the `lip_sync_tts_speaker` parameter selects one of the predefined voices. The code below, just like in the first example, creates a video generation task and then automatically polls the server every 15 seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AI/ML API key instead of : api_key = "" # Creating and sending a video generation task to the server def generate_video(): url = "https://api.aimlapi.com/v2/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "pixverse/v5/image-to-video", "image_url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/news-presenter.jpg", "prompt": "A young news presenter standing in the studio, facing the camera directly, eyes always on the camera, calm and professional, very still posture, minimal head movement, no sudden gestures, with a gentle friendly smile, confident stance, studio lighting, broadcast framing, realistic style, neutral background activity.", "lip_sync_tts_content": "Hello and welcome. This is our latest news update, and here are the headlines.", "lip_sync_tts_speaker": "Chloe" } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() # print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = "https://api.aimlapi.com/v2/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() print(gen_response) gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["queued", "generating"]: print(f"Status: {status}. Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. 
Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ''; // Creating and sending a video generation task to the server async function generateVideo() { const url = 'https://api.aimlapi.com/v2/video/generations'; const data = { model: 'pixverse/v5/image-to-video', image_url: 'https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/news-presenter.jpg', prompt: 'A young news presenter standing in the studio, facing the camera directly, eyes always on the camera, calm and professional, very still posture, minimal head movement, no sudden gestures, with a gentle friendly smile, confident stance, studio lighting, broadcast framing, realistic style, neutral background activity.', lip_sync_tts_content: 'Hello and welcome. This is our latest news update, and here are the headlines.', lip_sync_tts_speaker: 'Chloe' }; try { const response = await fetch(url, { method: 'POST', headers: { Authorization: `Bearer ${apiKey}`, 'Content-Type': 'application/json', }, body: JSON.stringify(data), }); if (!response.ok) { const errorText = await response.text(); console.error(`Error: ${response.status} - ${errorText}`); return null; } const responseData = await response.json(); console.log(responseData); return responseData; } catch (error) { console.error('Request failed:', error); return null; } } // Requesting the result of the task from the server using the generation_id async function getVideo(genId) { const url = new URL('https://api.aimlapi.com/v2/video/generations'); url.searchParams.append('generation_id', genId); try { const response = await fetch(url, { method: 'GET', headers: { Authorization: `Bearer ${apiKey}`, 'Content-Type': 'application/json', }, }); return await response.json(); } catch (error) { console.error('Error fetching video:', error); return null; } } // Initiates video generation and checks the status every 15 seconds until completion or timeout async function main() { const genResponse = await generateVideo(); if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 15 * 1000; // 15 sec const startTime = Date.now(); const checkStatus = async () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } const responseData = await getVideo(genId); if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["waiting", "queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); await new Promise(resolve => setTimeout(resolve, interval)); return checkStatus(); } else { console.log("Processing complete:\n", responseData); } }; await checkStatus(); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
### Statuses

| Status | Description |
| --- | --- |
| `queued` | Job is waiting in queue |
| `generating` | Video is being generated |
| `completed` | Generation successful, video available |
| `error` | Generation failed, check `error` field |
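A task can also finish with the `error` status. In that case, the response carries an `error` object with `name` and `message` fields, as defined in the schema above. A minimal sketch of branching on the final polling result (the `report_result` helper is illustrative, not part of the API):

{% code overflow="wrap" %}
```python
def report_result(response_data):
    # Branch on the final task status returned by the GET endpoint
    status = response_data.get("status")
    if status == "completed":
        print("Video ready:", response_data["video"]["url"])
    elif status == "error":
        # `name` and `message` are the fields defined in the response schema
        error = response_data.get("error") or {}
        print(f"Generation failed: {error.get('name')} - {error.get('message')}")
    else:
        print("Unexpected final status:", status)
```
{% endcode %}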
Response {% code overflow="wrap" %} ```json5 {'id': '3yFHGAkECD5RPnpL11mHe', 'status': 'queued', 'meta': {'usage': {'credits_used': 2000000}}} Generation ID: 3yFHGAkECD5RPnpL11mHe Status: queued. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: {'id': '3yFHGAkECD5RPnpL11mHe', 'status': 'succeeded', 'video': {'url': 'https://cdn.aimlapi.com/panda/pixverse%2Fmp4%2Fmedia%2Fweb%2Fori%2FJVT-OZSEbeCvZ2IKlQK6p_seed1592035041.mp4'}} ``` {% endcode %}
**Processing time**: \~1 min 2 sec. **Generated video** (1280x720, with sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/pixverse/v5-text-to-video.md # v5/text-to-video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `pixverse/v5/text-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} This model provides faster text-to-video rendering with consistently sharp, realistic, and cinematic-quality results. This model also generates videos with synchronized audio. For lip-sync input, you may supply text with a predefined voice. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
### Step-by-Step Instructions

Generating a video using this model involves sequentially calling two endpoints:

* The first one is for creating and sending a video generation task to the server (returns a generation ID).
* The second one is for requesting the generated video from the server using the generation ID received from the first endpoint.

Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a prompt.\ This endpoint creates and sends a video generation task to the server — and returns a generation ID.\ For lip-sync input, you may supply text (`lip_sync_tts_content`) with a predefined voice (`lip_sync_tts_speaker`). ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["pixverse/v5/text-to-video"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"aspect_ratio":{"type":"string","enum":["16:9","4:3","1:1","3:4","9:16"],"default":"16:9","description":"The aspect ratio of the generated video."},"resolution":{"type":"string","enum":["360p","540p","720p","1080p"],"default":"720p","description":"An enumeration where the short side of the video frame determines the resolution."},"duration":{"type":"integer","description":"The output video length in seconds. The 1080p quality option does not support 8-second videos.","enum":[5,8],"default":"5"},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"style":{"type":"string","enum":["anime","3d_animation","clay","comic","cyberpunk"],"description":"The style of the generated video."},"seed":{"type":"integer","description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. 
If unspecified, a random number is chosen."},"lip_sync_tts_content":{"type":"string","description":"The text content to be lip-synced in the video."},"lip_sync_tts_speaker":{"type":"string","enum":["Harper","Ava","Isabella","Sophia","Emily","Chloe","Julia","Mason","Jack","Liam","James","Oliver","Adrian","Ethan","Auto"],"description":"A predefined system voice used for generating speech in the video."}},"required":["model","prompt"],"title":"pixverse/v5/text-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `generation_id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% hint style="info" %} Generation takes about 30–40 seconds for a 5-second 720p video and around 1 minute 15 seconds for 1080p. {% endhint %} {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AI/ML API key instead of : api_key = "" # Creating and sending a video generation task to the server def generate_video(): url = "https://api.aimlapi.com/v2/generate/video/pixverse/generation" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "pixverse/v5/text-to-video", "prompt": "A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. 
We see it's coming.", "resolution": "1080p", "duration": 5 } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = "https://api.aimlapi.com/v2/generate/video/pixverse/generation" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Generate video gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; const https = require("https"); const { URL } = require("url"); // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "pixverse/v5/text-to-video", prompt: ` A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming. 
`, resolution: '1080p', duration: 5, }); const url = new URL(`${baseUrl}/generate/video/pixverse/generation`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data) } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const result = JSON.parse(body); callback(result); } }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/generate/video/pixverse/generation`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json" } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const result = JSON.parse(body); callback(result); }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 10 * 1000; // 10 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': '1fe4344e-3d44-4bf8-9f04-0ac4bb312eec:pixverse/v5/text-to-video', 'status': 'queued', 'meta': {'usage': {'tokens_used': 840000}}} Generation ID: 1fe4344e-3d44-4bf8-9f04-0ac4bb312eec:pixverse/v5/text-to-video Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': '1fe4344e-3d44-4bf8-9f04-0ac4bb312eec:pixverse/v5/text-to-video', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/eagle/files/penguin/xK3kbIC5S0pR_oEU4Uw1Q_output.mp4', 'content_type': 'video/mp4', 'file_name': 'output.mp4', 'file_size': 6274330}} ``` {% endcode %}
**Processing time**: \~1 min 14 sec. **Original**: [1920x1080](https://drive.google.com/file/d/1njsbseldEzKC6Ja7-CpOiY9jkk-LAPl7/view?usp=sharing) **Low-res GIF preview**:

"A menacing evil dragon appears in a distance above the tallest mountain, then rushes
toward the camera with its jaws open, revealing massive fangs. We see it's coming."

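The request in the example above sets only `prompt`, `resolution`, and `duration`. The POST schema above also accepts `aspect_ratio`, `negative_prompt`, `style`, and `seed`. An illustrative request body using these optional fields (the specific values here are arbitrary) might look like this:

{% code overflow="wrap" %}
```python
data = {
    "model": "pixverse/v5/text-to-video",
    "prompt": "A menacing evil dragon appears in a distance above the tallest mountain.",
    "aspect_ratio": "16:9",           # 16:9, 4:3, 1:1, 3:4 or 9:16
    "resolution": "720p",             # 360p, 540p, 720p or 1080p
    "duration": 5,                    # 5 or 8; 1080p does not support 8-second videos
    "negative_prompt": "blurry, low detail",
    "style": "3d_animation",          # anime, 3d_animation, clay, comic, cyberpunk
    "seed": 42,                       # reuse the same seed to get similar results for identical requests
}
```
{% endcode %}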
## Full Example #2: Lip-Sync Now let’s test the parameters related to the lip-sync feature. We’ll generate a video with some character and give them a piece of text to speak. The text goes into the `lip_sync_tts_content` parameter, and the `lip_sync_tts_speaker` parameter selects one of the predefined voices. The code below, just like in the first example, creates a video generation task and then automatically polls the server every 15 seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AI/ML API key instead of : api_key = "" # Creating and sending a video generation task to the server def generate_video(): url = "https://api.aimlapi.com/v2/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "pixverse/v5/text-to-video", "prompt": "A young blond-haired female news presenter standing in a modern TV news studio, facing the camera directly, eyes on the camera, calm and professional, minimal movement, with a gentle friendly smile, confident posture, studio lighting, broadcast framing, realistic style, neutral background activity.", "lip_sync_tts_content": "Hello and welcome. This is our latest news update, and here are the headlines.", "lip_sync_tts_speaker": "Ava" } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() # print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = "https://api.aimlapi.com/v2/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() print(gen_response) gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["queued", "generating"]: print(f"Status: {status}. Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ''; // Creating and sending a video generation task to the server async function generateVideo() { const url = 'https://api.aimlapi.com/v2/video/generations'; const data = { model: 'pixverse/v5/text-to-video', prompt: 'A young blond-haired female news presenter standing in a modern TV news studio, facing the camera directly, eyes on the camera, calm and professional, minimal movement, with a gentle friendly smile, confident posture, studio lighting, broadcast framing, realistic style, neutral background activity.', lip_sync_tts_content: 'Hello and welcome. 
This is our latest news update, and here are the headlines.', lip_sync_tts_speaker: 'Ava' }; try { const response = await fetch(url, { method: 'POST', headers: { Authorization: `Bearer ${apiKey}`, 'Content-Type': 'application/json', }, body: JSON.stringify(data), }); if (!response.ok) { const errorText = await response.text(); console.error(`Error: ${response.status} - ${errorText}`); return null; } const responseData = await response.json(); console.log(responseData); return responseData; } catch (error) { console.error('Request failed:', error); return null; } } // Requesting the result of the task from the server using the generation_id async function getVideo(genId) { const url = new URL('https://api.aimlapi.com/v2/video/generations'); url.searchParams.append('generation_id', genId); try { const response = await fetch(url, { method: 'GET', headers: { Authorization: `Bearer ${apiKey}`, 'Content-Type': 'application/json', }, }); return await response.json(); } catch (error) { console.error('Error fetching video:', error); return null; } } // Initiates video generation and checks the status every 15 seconds until completion or timeout async function main() { const genResponse = await generateVideo(); if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 15 * 1000; // 15 sec const startTime = Date.now(); const checkStatus = async () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } const responseData = await getVideo(genId); if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["waiting", "queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); await new Promise(resolve => setTimeout(resolve, interval)); return checkStatus(); } else { console.log("Processing complete:\n", responseData); } }; await checkStatus(); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
### Statuses

| Status | Description |
| --- | --- |
| `queued` | Job is waiting in queue |
| `generating` | Video is being generated |
| `completed` | Generation successful, video available |
| `error` | Generation failed, check `error` field |
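The response also reports consumption in `meta.usage.credits_used`, as you can see in the response log below. If you want to track it programmatically, a small sketch (the helper name is illustrative):

{% code overflow="wrap" %}
```python
def log_usage(response_data):
    # `meta` and `usage` are nullable in the schema, so guard against missing fields
    usage = (response_data.get("meta") or {}).get("usage") or {}
    credits_used = usage.get("credits_used")
    if credits_used is not None:
        print(f"Credits used: {credits_used}")
```
{% endcode %}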
Response {% code overflow="wrap" %} ```json5 {'id': 'Zx3z_NSUkI67m3sHg-rUq', 'status': 'queued', 'meta': {'usage': {'credits_used': 2000000}}} Generation ID: Zx3z_NSUkI67m3sHg-rUq Status: queued. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: {'id': 'Zx3z_NSUkI67m3sHg-rUq', 'status': 'succeeded', 'video': {'url': 'https://cdn.aimlapi.com/panda/pixverse%2Fmp4%2Fmedia%2Fweb%2Fori%2FtKPwdgHZmANBxqWuFuWYH_seed1123949342.mp4'}} ``` {% endcode %}
**Processing time**: \~1 min 17 sec. **Generated video** (1280x720, with sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/pixverse/v5-transition.md # v5/transition {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `pixverse/v5/transition` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} By specifying the first and last video frames as input, this image-to-video model is well suited for generating video scenes that can later be seamlessly edited into a complete clip in a video editor. Consistently sharp, realistic, and cinematic-quality results.\ This model also generates videos with synchronized audio. For lip-sync input, you may supply text with a predefined voice. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
### Step-by-Step Instructions

Generating a video using this model involves sequentially calling two endpoints:

* The first one is for creating and sending a video generation task to the server (returns a generation ID).
* The second one is for requesting the generated video from the server using the generation ID received from the first endpoint.

Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a reference image and a prompt. This endpoint creates and sends a video generation task to the server — and returns a generation ID. For lip-sync input, you may supply text (`lip_sync_tts_content`) with a predefined voice (`lip_sync_tts_speaker`). ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["pixverse/v5/transition"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"image_url":{"type":"string","format":"uri","description":"URL of the image to be used as the first frame of the video."},"tail_image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image to be used as the last frame of the video."},"resolution":{"type":"string","enum":["360p","540p","720p","1080p"],"default":"720p","description":"An enumeration where the short side of the video frame determines the resolution."},"duration":{"type":"integer","description":"The output video length in seconds. The 1080p quality option does not support 8-second videos.","enum":[5,8],"default":"5"},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"style":{"type":"string","enum":["anime","3d_animation","clay","comic","cyberpunk"],"description":"The style of the generated video."},"seed":{"type":"integer","description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. 
If unspecified, a random number is chosen."},"lip_sync_tts_content":{"type":"string","description":"The text content to be lip-synced in the video."},"lip_sync_tts_speaker":{"type":"string","enum":["Harper","Ava","Isabella","Sophia","Emily","Chloe","Julia","Mason","Jack","Liam","James","Oliver","Adrian","Ethan","Auto"],"description":"A predefined system voice used for generating speech in the video."}},"required":["model","prompt","image_url","tail_image_url"],"title":"pixverse/v5/transition"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. In this example, we set the last frame to match the first, creating a video that can be played in a seamless loop. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # replace with your actual AI/ML API key api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/generate/video/pixverse/generation" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "pixverse/v5/transition", "prompt": "Mona Lisa sits holding glasses in her hands, then puts them on, but changes her mind, takes them off, and hides them in a handbag on her lap. 
Then she smiles.", "image_url": "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", "tail_image_url": "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", "duration": 8 } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/generate/video/pixverse/generation" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "pixverse/v5/transition", prompt: "Mona Lisa sits holding glasses in her hands, then puts them on, but changes her mind, takes them off, and hides them in a handbag on her lap. 
Then she smiles.", image_url: "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", tail_image_url: "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", duration: 8, }); const url = new URL(`${baseUrl}/generate/video/pixverse/generation`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/generate/video/pixverse/generation`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("Failed to start generation"); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const startTime = Date.now(); const timeout = 600000; const checkStatus = () => { if (Date.now() - startTime > timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, 10000); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': '91ec52a3-9d26-4950-8346-f64334dc6554:pixverse/v5/transition', 'status': 'queued', 'meta': {'usage': {'tokens_used': 672000}}} Generation ID: 91ec52a3-9d26-4950-8346-f64334dc6554:pixverse/v5/transition Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': '91ec52a3-9d26-4950-8346-f64334dc6554:pixverse/v5/transition', 'status': 'completed', 'video': {'url': 'https://v3b.fal.media/files/b/penguin/zzrut-L6TMSVpD5ryPGeB_output.mp4', 'content_type': 'video/mp4', 'file_name': 'output.mp4', 'file_size': 4955795}} ``` {% endcode %}
**Processing time**: \~1.5 min. **Original**: [864x1280](https://drive.google.com/file/d/1nqDmNR4N7JuzfoF7ZZxy-E5eQDPW7WYX/view?usp=sharing) **Low-res GIF preview**:

"Mona Lisa sits holding glasses in her hands, then puts them on,
but changes her mind, takes them off, and hides them in a handbag on her lap.
Then she smiles."

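According to the POST schema above, `tail_image_url` accepts either a direct link to an online image or a Base64-encoded local image. If your last frame is a local file, you could encode it along these lines; the exact data-URI format the API expects is an assumption here, so a plain public URL (as in the example above) remains the safest option:

{% code overflow="wrap" %}
```python
import base64

def image_to_data_uri(path, mime_type="image/jpeg"):
    # Encode a local image as a data URI (assumed format for Base64 input)
    with open(path, "rb") as file:
        encoded = base64.b64encode(file.read()).decode("utf-8")
    return f"data:{mime_type};base64,{encoded}"

# Hypothetical local file used as the last frame:
# data["tail_image_url"] = image_to_data_uri("last_frame.jpg")
```
{% endcode %}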
## Full Example #2: Lip-Sync Now let’s test the parameters related to the lip-sync feature. We’ll generate a video with some character and give them a piece of text to speak. The text goes into the `lip_sync_tts_content` parameter, and the `lip_sync_tts_speaker` parameter selects one of the predefined voices. The code below, just like in the first example, creates a video generation task and then automatically polls the server every 15 seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AI/ML API key instead of : api_key = "" # Creating and sending a video generation task to the server def generate_video(): url = "https://api.aimlapi.com/v2/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "pixverse/v5/image-to-video", "image_url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/news-presenter.jpg", "tail_image_url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/news-presenter.jpg", "prompt": "A young news presenter standing in the studio, facing the camera directly, eyes always on the camera, calm and professional, very still posture, minimal head movement, no sudden gestures, with a gentle friendly smile, confident stance, studio lighting, broadcast framing, realistic style, neutral background activity.", "lip_sync_tts_content": "Hello and welcome. This is our latest news update, and here are the headlines.", "lip_sync_tts_speaker": "Chloe", "duration": 5 } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() # print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = "https://api.aimlapi.com/v2/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() print(gen_response) gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["queued", "generating"]: print(f"Status: {status}. Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. 
Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ''; // Creating and sending a video generation task to the server async function generateVideo() { const url = 'https://api.aimlapi.com/v2/video/generations'; const data = { model: 'pixverse/v5/image-to-video', image_url: 'https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/news-presenter.jpg', tail_image_url: 'https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/news-presenter.jpg', prompt: 'A young news presenter standing in the studio, facing the camera directly, eyes always on the camera, calm and professional, very still posture, minimal head movement, no sudden gestures, with a gentle friendly smile, confident stance, studio lighting, broadcast framing, realistic style, neutral background activity.', lip_sync_tts_content: 'Hello and welcome. This is our latest news update, and here are the headlines.', lip_sync_tts_speaker: 'Chloe', duration: 5 }; try { const response = await fetch(url, { method: 'POST', headers: { Authorization: `Bearer ${apiKey}`, 'Content-Type': 'application/json', }, body: JSON.stringify(data), }); if (!response.ok) { const errorText = await response.text(); console.error(`Error: ${response.status} - ${errorText}`); return null; } const responseData = await response.json(); console.log(responseData); return responseData; } catch (error) { console.error('Request failed:', error); return null; } } // Requesting the result of the task from the server using the generation_id async function getVideo(genId) { const url = new URL('https://api.aimlapi.com/v2/video/generations'); url.searchParams.append('generation_id', genId); try { const response = await fetch(url, { method: 'GET', headers: { Authorization: `Bearer ${apiKey}`, 'Content-Type': 'application/json', }, }); return await response.json(); } catch (error) { console.error('Error fetching video:', error); return null; } } // Initiates video generation and checks the status every 15 seconds until completion or timeout async function main() { const genResponse = await generateVideo(); if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 15 * 1000; // 15 sec const startTime = Date.now(); const checkStatus = async () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } const responseData = await getVideo(genId); if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["waiting", "queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); await new Promise(resolve => setTimeout(resolve, interval)); return checkStatus(); } else { console.log("Processing complete:\n", responseData); } }; await checkStatus(); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
### Statuses

| Status | Description |
| --- | --- |
| `queued` | Job is waiting in queue |
| `generating` | Video is being generated |
| `completed` | Generation successful, video available |
| `error` | Generation failed, check `error` field |
Response {% code overflow="wrap" %} ```json5 {'id': '-G3b3vVKdoC42fMNeK1T5', 'status': 'queued', 'meta': {'usage': {'credits_used': 2000000}}} Generation ID: -G3b3vVKdoC42fMNeK1T5 Status: queued. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: {'id': '-G3b3vVKdoC42fMNeK1T5', 'status': 'succeeded', 'video': {'url': 'https://cdn.aimlapi.com/panda/pixverse%2Fmp4%2Fmedia%2Fweb%2Fori%2FE5tYBt2VcrPauFryGrbzL_seed685977357.mp4'}} ``` {% endcode %}
**Processing time**: \~1 min 18 sec. **Generated video** (1280x720, with sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/veed.md # VEED - [fabric-1.0](/api-references/video-models/veed/fabric-1.0.md) - [fabric-1.0-fast](/api-references/video-models/veed/fabric-1.0-fast.md) --- # Source: https://docs.aimlapi.com/api-references/video-models/google/veo-3-1-first-last-image-to-video-fast.md # Veo 3.1 Fast (First-Last-Image-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `google/veo-3.1-first-last-image-to-video-fast` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} The model generates realistic 8-second 720p and 1080p videos with detailed visuals and audio, offering multiple styles and even dialogue support. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
### Step-by-Step Instructions

Generating a video using this model involves sequentially calling two endpoints:

* The first one is for creating and sending a video generation task to the server (returns a generation ID).
* The second one is for requesting the generated video from the server using the generation ID received from the first endpoint.

Below, you can find two corresponding API schemas and examples for both endpoint calls.
## API Schemas ### Create a video generation task and send it to the server You can generate a video using this API. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/veo-3.1-first-last-image-to-video-fast"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"image_url":{"type":"string","format":"uri","description":"URL of the input image to animate. Should be 720p or higher resolution."},"last_image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image to be used as the last frame of the video."},"aspect_ratio":{"type":"string","enum":["16:9","9:16"],"description":"The aspect ratio of the generated video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[4,6,8],"default":"8"},"resolution":{"type":"string","enum":["720p","1080p"],"default":"1080p"},"generate_audio":{"type":"boolean","default":true,"description":"Whether to generate audio for the video."}},"required":["model","prompt","image_url","last_image_url"],"title":"google/veo-3.1-first-last-image-to-video-fast"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Fetch the video After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : aimlapi_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {aimlapi_key}", } data = { "model": "google/veo-3.1-first-last-image-to-video-fast", "prompt": "A woman looks into the camera, breathes in, then exclaims energetically, 'Hello world!'", "image_url": "https://storage.googleapis.com/falserverless/example_inputs/veo31-flf2v-input-1.jpeg", "last_image_url": "https://storage.googleapis.com/falserverless/example_inputs/veo31-flf2v-input-2.jpeg", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {aimlapi_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... 
Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; // Creating and sending a video generation task to the server async function generateVideo() { const url = "https://api.aimlapi.com/v2/video/generations"; const data = { model: "google/veo-3.1-first-last-image-to-video-fast", prompt: 'A woman looks into the camera, breathes in, then exclaims energetically, "Hello world!"', image_url: 'https://storage.googleapis.com/falserverless/example_inputs/veo31-flf2v-input-1.jpeg', last_image_url: 'https://storage.googleapis.com/falserverless/example_inputs/veo31-flf2v-input-2.jpeg', }; try { const response = await fetch(url, { method: "POST", headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json", }, body: JSON.stringify(data), }); if (!response.ok) { const errorText = await response.text(); console.error(`Error: ${response.status} - ${errorText}`); return null; } const responseData = await response.json(); console.log(responseData); return responseData; } catch (error) { console.error("Request failed:", error); return null; } } // Requesting the result of the task from the server using the generation_id async function getVideo(genId) { const url = new URL("https://api.aimlapi.com/v2/video/generations"); url.searchParams.append("generation_id", genId); try { const response = await fetch(url, { method: "GET", headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json", }, }); return await response.json(); } catch (error) { console.error("Error fetching video:", error); return null; } } // Initiates video generation and checks the status every 10 seconds until completion or timeout async function main() { const genResponse = await generateVideo(); if (!genResponse) return; const genId = genResponse.id; console.log("Generation ID:", genId); if (genId) { const timeout = 600 * 1000; // 10 minutes const startTime = Date.now(); while (Date.now() - startTime < timeout) { const responseData = await getVideo(genId); if (!responseData) { console.error("Error: No response from API"); break; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); await new Promise((resolve) => setTimeout(resolve, 10000)); } else { console.log("Processing complete:\n", responseData); return responseData; } } console.log("Timeout reached. Stopping."); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: 20f8ba41-0832-4a9c-ae45-a6c476ceb279:google/veo-3.1-first-last-image-to-video-fast Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete: {'id': '20f8ba41-0832-4a9c-ae45-a6c476ceb279:google/veo-3.1-first-last-image-to-video-fast', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/flamingo/files/b/panda/zQs0rII39VlDmGgsa38ZH_output.mp4'}} ``` {% endcode %}
**Low-res GIF preview**:
--- # Source: https://docs.aimlapi.com/api-references/video-models/google/veo-3-1-first-last-image-to-video.md # Veo 3.1 (First-Last-Image-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `google/veo-3.1-first-last-image-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} The model generates realistic 8-second 720p and 1080p videos with detailed visuals and audio, offering multiple styles and even dialogue support. It also allows specifying the first and last frames of the video. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
**Step-by-Step Instructions**

Generating a video using this model involves sequentially calling two endpoints:

* The first one is for creating and sending a video generation task to the server (returns a generation ID).
* The second one is for requesting the generated video from the server using the generation ID received from the first endpoint.

Below, you can find two corresponding API schemas and examples for both endpoint calls.
## API Schemas ### Create a video generation task and send it to the server You can generate a video using this API. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/veo-3.1-first-last-image-to-video"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"image_url":{"type":"string","format":"uri","description":"URL of the input image to animate. Should be 720p or higher resolution."},"last_image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image to be used as the last frame of the video."},"aspect_ratio":{"type":"string","enum":["16:9","9:16"],"description":"The aspect ratio of the generated video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[4,6,8],"default":"8"},"resolution":{"type":"string","enum":["720p","1080p"],"default":"1080p"},"generate_audio":{"type":"boolean","default":true,"description":"Whether to generate audio for the video."}},"required":["model","prompt","image_url","last_image_url"],"title":"google/veo-3.1-first-last-image-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Fetch the video After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : aimlapi_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {aimlapi_key}", } data = { "model": "google/veo-3.1-first-last-image-to-video", "prompt": "A woman looks into the camera, breathes in, then exclaims energetically, 'Hello world!'", "image_url": "https://storage.googleapis.com/falserverless/example_inputs/veo31-flf2v-input-1.jpeg", "last_image_url": "https://storage.googleapis.com/falserverless/example_inputs/veo31-flf2v-input-2.jpeg", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {aimlapi_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... 
Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; // Creating and sending a video generation task to the server async function generateVideo() { const url = "https://api.aimlapi.com/v2/video/generations"; const data = { model: "google/veo-3.1-first-last-image-to-video", prompt: 'A woman looks into the camera, breathes in, then exclaims energetically, "Hello world!"', image_url: 'https://storage.googleapis.com/falserverless/example_inputs/veo31-flf2v-input-1.jpeg', last_image_url: 'https://storage.googleapis.com/falserverless/example_inputs/veo31-flf2v-input-2.jpeg', }; try { const response = await fetch(url, { method: "POST", headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json", }, body: JSON.stringify(data), }); if (!response.ok) { const errorText = await response.text(); console.error(`Error: ${response.status} - ${errorText}`); return null; } const responseData = await response.json(); console.log(responseData); return responseData; } catch (error) { console.error("Request failed:", error); return null; } } // Requesting the result of the task from the server using the generation_id async function getVideo(genId) { const url = new URL("https://api.aimlapi.com/v2/video/generations"); url.searchParams.append("generation_id", genId); try { const response = await fetch(url, { method: "GET", headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json", }, }); return await response.json(); } catch (error) { console.error("Error fetching video:", error); return null; } } // Initiates video generation and checks the status every 10 seconds until completion or timeout async function main() { const genResponse = await generateVideo(); if (!genResponse) return; const genId = genResponse.id; console.log("Generation ID:", genId); if (genId) { const timeout = 600 * 1000; // 10 minutes const startTime = Date.now(); while (Date.now() - startTime < timeout) { const responseData = await getVideo(genId); if (!responseData) { console.error("Error: No response from API"); break; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); await new Promise((resolve) => setTimeout(resolve, 10000)); } else { console.log("Processing complete:\n", responseData); return responseData; } } console.log("Timeout reached. Stopping."); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: 9fa414f4-9733-46c8-a88c-ae46206f5e47:google/veo-3.1-first-last-image-to-video Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete: {'id': '9fa414f4-9733-46c8-a88c-ae46206f5e47:google/veo-3.1-first-last-image-to-video', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/flamingo/files/b/elephant/Qa7slw4Wgl2g4n3jTWnAY_output.mp4'}} ``` {% endcode %}
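Once the status is `completed`, the response carries a direct `video.url`. If you'd rather save the MP4 locally than just print the link, a minimal follow-up sketch (the output file name is arbitrary):

{% code overflow="wrap" %}
```python
import requests

def download_video(video_url: str, file_name: str = "output.mp4") -> None:
    """Stream the generated MP4 from the returned URL into a local file."""
    with requests.get(video_url, stream=True) as r:
        r.raise_for_status()
        with open(file_name, "wb") as f:
            for chunk in r.iter_content(chunk_size=8192):
                f.write(chunk)

# For example, using the final response returned by the polling loop above:
# download_video(response_data["video"]["url"])
```
{% endcode %}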
**Low-res GIF preview**:
--- # Source: https://docs.aimlapi.com/api-references/video-models/google/veo-3-1-image-to-video-fast.md # Veo 3.1 Fast (Image-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `google/veo-3.1-i2v-fast` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} The model generates realistic 8-second 720p and 1080p videos with detailed visuals and audio, offering multiple styles and even dialogue support. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
**Step-by-Step Instructions**

Generating a video using this model involves sequentially calling two endpoints:

* The first one is for creating and sending a video generation task to the server (returns a generation ID).
* The second one is for requesting the generated video from the server using the generation ID received from the first endpoint.

Below, you can find two corresponding API schemas and examples for both endpoint calls.
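Beyond the required `model`, `prompt`, and `image_url`, the request schema below exposes a few optional fields. As an illustration (the values are only examples), a request for a shorter, silent, vertical 720p clip could look like this:

{% code overflow="wrap" %}
```python
# Illustrative request body using the optional fields from the schema below.
data = {
    "model": "google/veo-3.1-i2v-fast",
    "prompt": "The woman puts on glasses with her hands and then sighs and says slowly: 'Well...'.",
    "image_url": "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg",
    "aspect_ratio": "9:16",   # or "16:9"
    "duration": 4,            # 4, 6, or 8 seconds (default: 8)
    "resolution": "720p",     # "720p" or "1080p" (default: "1080p")
    "generate_audio": False,  # skip audio generation
}
```
{% endcode %}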
## API Schemas ### Create a video generation task and send it to the server You can generate a video using this API. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/veo-3.1-i2v-fast"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"image_url":{"type":"string","format":"uri","description":"URL of the input image to animate. Should be 720p or higher resolution."},"aspect_ratio":{"type":"string","enum":["16:9","9:16"],"description":"The aspect ratio of the generated video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[4,6,8],"default":"8"},"resolution":{"type":"string","enum":["720p","1080p"],"default":"1080p"},"generate_audio":{"type":"boolean","default":true,"description":"Whether to generate audio for the video."}},"required":["model","prompt","image_url"],"title":"google/veo-3.1-i2v-fast"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Fetch the video After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server We have a classic [reproduction](https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg) of the famous da Vinci painting. Let's ask the model to generate a video where the Mona Lisa puts on glasses. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : aimlapi_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {aimlapi_key}", } data = { "model": "google/veo-3.1-i2v-fast", "prompt": "The woman puts on glasses with her hands and then sighs and says slowly: 'Well...'.", "image_url": "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {aimlapi_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... 
Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; // Creating and sending a video generation task to the server async function generateVideo() { const url = "https://api.aimlapi.com/v2/video/generations"; const data = { model: "google/veo-3.1-i2v-fast", prompt: "The woman puts on glasses with her hands and then sighs and says slowly: 'Well...'.", image_url: "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", }; try { const response = await fetch(url, { method: "POST", headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json", }, body: JSON.stringify(data), }); if (!response.ok) { const errorText = await response.text(); console.error(`Error: ${response.status} - ${errorText}`); return null; } const responseData = await response.json(); console.log(responseData); return responseData; } catch (error) { console.error("Request failed:", error); return null; } } // Requesting the result of the task from the server using the generation_id async function getVideo(genId) { const url = new URL("https://api.aimlapi.com/v2/video/generations"); url.searchParams.append("generation_id", genId); try { const response = await fetch(url, { method: "GET", headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json", }, }); return await response.json(); } catch (error) { console.error("Error fetching video:", error); return null; } } // Initiates video generation and checks the status every 10 seconds until completion or timeout async function main() { const genResponse = await generateVideo(); if (!genResponse) return; const genId = genResponse.id; console.log("Generation ID:", genId); if (genId) { const timeout = 600 * 1000; // 10 minutes const startTime = Date.now(); while (Date.now() - startTime < timeout) { const responseData = await getVideo(genId); if (!responseData) { console.error("Error: No response from API"); break; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); await new Promise((resolve) => setTimeout(resolve, 10000)); } else { console.log("Processing complete:\n", responseData); return responseData; } } console.log("Timeout reached. Stopping."); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: 812dcc2d-05ba-4ea4-bd79-062be27269c3:google/veo-3.1-i2v-fast Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete: {'id': '812dcc2d-05ba-4ea4-bd79-062be27269c3:google/veo-3.1-i2v-fast', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/flamingo/files/b/tiger/Sp-Wjt6lWhS5VMK-JlmFH_output.mp4'}} ``` {% endcode %}
**Low-res GIF preview**:
--- # Source: https://docs.aimlapi.com/api-references/video-models/google/veo-3-1-image-to-video.md # Veo 3.1 (Image-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `google/veo-3.1-i2v` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} The model generates realistic 8-second 720p and 1080p videos with detailed visuals and audio, offering multiple styles and even dialogue support. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
**Step-by-Step Instructions**

Generating a video using this model involves sequentially calling two endpoints:

* The first one is for creating and sending a video generation task to the server (returns a generation ID).
* The second one is for requesting the generated video from the server using the generation ID received from the first endpoint.

Below, you can find two corresponding API schemas and examples for both endpoint calls.
## API Schemas ### Create a video generation task and send it to the server You can generate a video using this API. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/veo-3.1-i2v"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"image_url":{"type":"string","format":"uri","description":"URL of the input image to animate. Should be 720p or higher resolution."},"aspect_ratio":{"type":"string","enum":["16:9","9:16"],"description":"The aspect ratio of the generated video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[4,6,8],"default":"8"},"resolution":{"type":"string","enum":["720p","1080p"],"default":"1080p"},"generate_audio":{"type":"boolean","default":true,"description":"Whether to generate audio for the video."}},"required":["model","prompt","image_url"],"title":"google/veo-3.1-i2v"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Fetch the video After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server We have a classic [reproduction](https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg) of the famous da Vinci painting. Let's ask the model to generate a video where the Mona Lisa puts on glasses. {% hint style="warning" %} Generation may take around 80-100 seconds for a 8-second video. 
{% endhint %} {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : aimlapi_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {aimlapi_key}", } data = { "model": "google/veo-3.1-i2v", "prompt": "The woman puts on glasses with her hands and then sighs and says slowly: 'Well...'.", "image_url": "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {aimlapi_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. 
Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; // Creating and sending a video generation task to the server async function generateVideo() { const url = "https://api.aimlapi.com/v2/video/generations"; const data = { model: "google/veo-3.1-i2v", prompt: "The woman puts on glasses with her hands and then sighs and says slowly: 'Well...'.", image_url: "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", }; try { const response = await fetch(url, { method: "POST", headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json", }, body: JSON.stringify(data), }); if (!response.ok) { const errorText = await response.text(); console.error(`Error: ${response.status} - ${errorText}`); return null; } const responseData = await response.json(); console.log(responseData); return responseData; } catch (error) { console.error("Request failed:", error); return null; } } // Requesting the result of the task from the server using the generation_id async function getVideo(genId) { const url = new URL("https://api.aimlapi.com/v2/video/generations"); url.searchParams.append("generation_id", genId); try { const response = await fetch(url, { method: "GET", headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json", }, }); return await response.json(); } catch (error) { console.error("Error fetching video:", error); return null; } } // Initiates video generation and checks the status every 10 seconds until completion or timeout async function main() { const genResponse = await generateVideo(); if (!genResponse) return; const genId = genResponse.id; console.log("Generation ID:", genId); if (genId) { const timeout = 600 * 1000; // 10 minutes const startTime = Date.now(); while (Date.now() - startTime < timeout) { const responseData = await getVideo(genId); if (!responseData) { console.error("Error: No response from API"); break; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); await new Promise((resolve) => setTimeout(resolve, 10000)); } else { console.log("Processing complete:\n", responseData); return responseData; } } console.log("Timeout reached. Stopping."); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: dfb91aa0-6c60-4078-b3b5-73142ba4d853:google/veo-3.1-i2v Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete: {'id': 'dfb91aa0-6c60-4078-b3b5-73142ba4d853:google/veo-3.1-i2v', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/flamingo/files/b/penguin/ul8_jV9tXhk683C5byiHN_output.mp4'}} ``` {% endcode %}
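The run above ended with `completed`, but the response schema also allows a final status of `error`, along with optional usage metadata. A hedged sketch of inspecting the final polling response for both outcomes:

{% code overflow="wrap" %}
```python
def report_result(response_data: dict) -> None:
    """Inspect the final polling response: print the video URL or the error details."""
    status = response_data.get("status")
    if status == "completed":
        print("Video URL:", response_data["video"]["url"])
        usage = (response_data.get("meta") or {}).get("usage")
        if usage:
            print("Credits used:", usage["credits_used"])
    elif status == "error":
        err = response_data.get("error") or {}
        print(f"Generation failed: {err.get('name')} - {err.get('message')}")
    else:
        print("Unexpected final status:", status)
```
{% endcode %}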
**Low-res GIF preview**:
--- # Source: https://docs.aimlapi.com/api-references/video-models/google/veo-3-1-reference-to-video.md # Veo 3.1 (Reference-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `google/veo-3.1-reference-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} The model generates realistic 8-second 720p and 1080p videos with detailed visuals and audio, offering multiple styles and even dialogue support. It also supports multiple reference images for video generation. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
**Step-by-Step Instructions**

Generating a video using this model involves sequentially calling two endpoints:

* The first one is for creating and sending a video generation task to the server (returns a generation ID).
* The second one is for requesting the generated video from the server using the generation ID received from the first endpoint.

Below, you can find two corresponding API schemas and examples for both endpoint calls.
## API Schemas ### Create a video generation task and send it to the server You can generate a video using this API. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/veo-3.1-reference-to-video"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"image_urls":{"type":"array","items":{"type":"string","format":"uri"},"description":"URL of the input image to animate. Should be 720p or higher resolution."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[8]},"resolution":{"type":"string","enum":["720p","1080p"],"default":"1080p"},"generate_audio":{"type":"boolean","default":true,"description":"Whether to generate audio for the video."}},"required":["model","prompt","image_urls"],"title":"google/veo-3.1-reference-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Fetch the video After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : aimlapi_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {aimlapi_key}", } data = { "model": "google/veo-3.1-reference-to-video", "prompt": "A graceful ballerina dancing outside a circus tent on green grass, with colorful wildflowers swaying around her as she twirls and poses in the meadow.", "image_urls": [ "https://storage.googleapis.com/falserverless/example_inputs/veo31-r2v-input-1.png", "https://storage.googleapis.com/falserverless/example_inputs/veo31-r2v-input-2.png", "https://storage.googleapis.com/falserverless/example_inputs/veo31-r2v-input-3.png" ] } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {aimlapi_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == 
"generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; // Creating and sending a video generation task to the server async function generateVideo() { const url = "https://api.aimlapi.com/v2/video/generations"; const data = { model: "google/veo-3.1-reference-to-video", prompt: 'A graceful ballerina dancing outside a circus tent on green grass, with colorful wildflowers swaying around her as she twirls and poses in the meadow.', image_urls: [ 'https://storage.googleapis.com/falserverless/example_inputs/veo31-r2v-input-1.png', 'https://storage.googleapis.com/falserverless/example_inputs/veo31-r2v-input-2.png', 'https://storage.googleapis.com/falserverless/example_inputs/veo31-r2v-input-3.png', ], }; try { const response = await fetch(url, { method: "POST", headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json", }, body: JSON.stringify(data), }); if (!response.ok) { const errorText = await response.text(); console.error(`Error: ${response.status} - ${errorText}`); return null; } const responseData = await response.json(); console.log(responseData); return responseData; } catch (error) { console.error("Request failed:", error); return null; } } // Requesting the result of the task from the server using the generation_id async function getVideo(genId) { const url = new URL("https://api.aimlapi.com/v2/video/generations"); url.searchParams.append("generation_id", genId); try { const response = await fetch(url, { method: "GET", headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json", }, }); return await response.json(); } catch (error) { console.error("Error fetching video:", error); return null; } } // Initiates video generation and checks the status every 10 seconds until completion or timeout async function main() { const genResponse = await generateVideo(); if (!genResponse) return; const genId = genResponse.id; console.log("Generation ID:", genId); if (genId) { const timeout = 600 * 1000; // 10 minutes const startTime = Date.now(); while (Date.now() - startTime < timeout) { const responseData = await getVideo(genId); if (!responseData) { console.error("Error: No response from API"); break; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); await new Promise((resolve) => setTimeout(resolve, 10000)); } else { console.log("Processing complete:\n", responseData); return responseData; } } console.log("Timeout reached. Stopping."); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: 4ef07b4c-adf2-4439-9a1c-2d3b67f1c0c4:google/veo-3.1-reference-to-video Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete: {'id': '4ef07b4c-adf2-4439-9a1c-2d3b67f1c0c4:google/veo-3.1-reference-to-video', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/flamingo/files/b/kangaroo/pGAX6W5_rbZRSV1tbmAvq_output.mp4'}} ``` {% endcode %}
**Low-res GIF preview**:
--- # Source: https://docs.aimlapi.com/api-references/video-models/google/veo-3-1-text-to-video-fast.md # Veo 3.1 Fast (Text-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `google/veo-3.1-t2v-fast` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} The model generates realistic 4, 6, 8-second 720p and 1080p videos with detailed visuals and audio. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
**Step-by-Step Instructions**

Generating a video using this model involves sequentially calling two endpoints:

* The first one is for creating and sending a video generation task to the server (returns a generation ID).
* The second one is for requesting the generated video from the server using the generation ID received from the first endpoint.

Below, you can find two corresponding API schemas and an example with both endpoint calls.
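The text-to-video schema below includes a few generation controls that the image-based variants don't list, such as `seed`, `negative_prompt`, `enhance_prompt`, and `auto_fix`. An illustrative request body using them (all values are examples, not recommendations):

{% code overflow="wrap" %}
```python
# Illustrative request body using the optional generation controls from the schema below.
data = {
    "model": "google/veo-3.1-t2v-fast",
    "prompt": "A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs.",
    "negative_prompt": "blurry footage, text overlays, watermarks",  # elements to avoid
    "seed": 42,              # reusing the same seed gives similar results on reruns
    "duration": 6,           # 4, 6, or 8 seconds (default: 8)
    "resolution": "720p",    # "720p" or "1080p" (default: "1080p")
    "generate_audio": True,
    "enhance_prompt": True,  # let the service enhance the prompt
    "auto_fix": True,        # auto-rewrite prompts that fail validation checks
}
```
{% endcode %}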
## API Schemas ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a prompt. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/veo-3.1-t2v-fast"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"aspect_ratio":{"type":"string","enum":["16:9","9:16"],"description":"The aspect ratio of the generated video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[4,6,8],"default":"8"},"resolution":{"type":"string","enum":["720p","1080p"],"default":"1080p"},"generate_audio":{"type":"boolean","default":true,"description":"Whether to generate audio for the video."},"seed":{"type":"integer","description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. If unspecified, a random number is chosen."},"auto_fix":{"type":"boolean","default":true,"description":"Whether to automatically attempt to fix prompts that fail content policy or other validation checks by rewriting them."},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"enhance_prompt":{"type":"boolean","default":true,"description":"Whether to enhance the video generation."}},"required":["model","prompt"],"title":"google/veo-3.1-t2v-fast"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : aimlapi_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {aimlapi_key}", } data = { "model": "google/veo-3.1-t2v-fast", "prompt": ''' A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming. ''' } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {aimlapi_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... 
Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; // Creating and sending a video generation task to the server async function generateVideo() { const url = "https://api.aimlapi.com/v2/video/generations"; const data = { model: "google/veo-3.1-t2v-fast", prompt: "A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming.", }; try { const response = await fetch(url, { method: "POST", headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json", }, body: JSON.stringify(data), }); if (!response.ok) { const errorText = await response.text(); console.error(`Error: ${response.status} - ${errorText}`); return null; } const responseData = await response.json(); console.log(responseData); return responseData; } catch (error) { console.error("Request failed:", error); return null; } } // Requesting the result of the task from the server using the generation_id async function getVideo(genId) { const url = new URL("https://api.aimlapi.com/v2/video/generations"); url.searchParams.append("generation_id", genId); try { const response = await fetch(url, { method: "GET", headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json", }, }); return await response.json(); } catch (error) { console.error("Error fetching video:", error); return null; } } // Initiates video generation and checks the status every 10 seconds until completion or timeout async function main() { const genResponse = await generateVideo(); if (!genResponse) return; const genId = genResponse.id; console.log("Generation ID:", genId); if (genId) { const timeout = 600 * 1000; // 10 minutes const startTime = Date.now(); while (Date.now() - startTime < timeout) { const responseData = await getVideo(genId); if (!responseData) { console.error("Error: No response from API"); break; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); await new Promise((resolve) => setTimeout(resolve, 10000)); } else { console.log("Processing complete:\n", responseData); return responseData; } } console.log("Timeout reached. Stopping."); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: 8e9c4413-14db-454e-b7c0-6e8cd6d25418:google/veo-3.1-t2v-fast Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete: {'id': '8e9c4413-14db-454e-b7c0-6e8cd6d25418:google/veo-3.1-t2v-fast', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/flamingo/files/b/lion/b4MqgPJ6bc8pb--uNYD_M_output.mp4'}} ``` {% endcode %}
**Low-res GIF preview**:
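Once the polling loop reports the `completed` status, the `video.url` field in the response points to the finished MP4 file. The helper below is not part of the original example; it is a minimal sketch that assumes the completed response shape shown above, and the output file name is arbitrary.

{% code overflow="wrap" %}
```python
import requests

# A minimal sketch (not part of the example above): downloading the finished MP4
# once polling returns a completed response. The output file name is arbitrary.
def download_video(response_data, file_name="generated_video.mp4"):
    if not response_data or response_data.get("status") != "completed":
        raise ValueError(f"Task is not completed: {response_data}")
    video_url = response_data["video"]["url"]
    with requests.get(video_url, stream=True) as resp:
        resp.raise_for_status()
        with open(file_name, "wb") as file:
            for chunk in resp.iter_content(chunk_size=8192):
                file.write(chunk)
    return file_name

# Example usage with the result returned by main() in the code above:
# download_video(main())
```
{% endcode %}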
--- # Source: https://docs.aimlapi.com/api-references/video-models/google/veo-3-1-text-to-video.md # Veo 3.1 (Text-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `google/veo-3.1-t2v` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} The model generates realistic 4, 6, 8-second 720p and 1080p videos with detailed visuals and audio. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find two corresponding API schemas and an example with both endpoint calls.
## API Schemas ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a prompt. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/veo-3.1-t2v"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"aspect_ratio":{"type":"string","enum":["16:9","9:16"],"description":"The aspect ratio of the generated video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[4,6,8],"default":"8"},"resolution":{"type":"string","enum":["720p","1080p"],"default":"1080p"},"generate_audio":{"type":"boolean","default":true,"description":"Whether to generate audio for the video."},"seed":{"type":"integer","description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. If unspecified, a random number is chosen."},"auto_fix":{"type":"boolean","default":true,"description":"Whether to automatically attempt to fix prompts that fail content policy or other validation checks by rewriting them."},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"enhance_prompt":{"type":"boolean","default":true,"description":"Whether to enhance the video generation."}},"required":["model","prompt"],"title":"google/veo-3.1-t2v"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : aimlapi_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {aimlapi_key}", } data = { "model": "google/veo-3.1-t2v", "prompt": ''' A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming. ''' } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {aimlapi_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... 
Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; // Creating and sending a video generation task to the server async function generateVideo() { const url = "https://api.aimlapi.com/v2/video/generations"; const data = { model: "google/veo-3.1-t2v", prompt: "A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming.", }; try { const response = await fetch(url, { method: "POST", headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json", }, body: JSON.stringify(data), }); if (!response.ok) { const errorText = await response.text(); console.error(`Error: ${response.status} - ${errorText}`); return null; } const responseData = await response.json(); console.log(responseData); return responseData; } catch (error) { console.error("Request failed:", error); return null; } } // Requesting the result of the task from the server using the generation_id async function getVideo(genId) { const url = new URL("https://api.aimlapi.com/v2/video/generations"); url.searchParams.append("generation_id", genId); try { const response = await fetch(url, { method: "GET", headers: { Authorization: `Bearer ${apiKey}`, "Content-Type": "application/json", }, }); return await response.json(); } catch (error) { console.error("Error fetching video:", error); return null; } } // Initiates video generation and checks the status every 10 seconds until completion or timeout async function main() { const genResponse = await generateVideo(); if (!genResponse) return; const genId = genResponse.id; console.log("Generation ID:", genId); if (genId) { const timeout = 600 * 1000; // 10 minutes const startTime = Date.now(); while (Date.now() - startTime < timeout) { const responseData = await getVideo(genId); if (!responseData) { console.error("Error: No response from API"); break; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); await new Promise((resolve) => setTimeout(resolve, 10000)); } else { console.log("Processing complete:\n", responseData); return responseData; } } console.log("Timeout reached. Stopping."); } } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: 524671b1-c89a-4fb2-b731-b52111a9dcbe:google/veo-3.1-t2v Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete: {'id': '524671b1-c89a-4fb2-b731-b52111a9dcbe:google/veo-3.1-t2v', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/flamingo/files/b/monkey/sv_xtfXXGeG1dCP2tmo36_output.mp4'}} ``` {% endcode %}
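The example above sends only the required `model` and `prompt` fields. If you want to control the optional parameters documented in the request schema (`aspect_ratio`, `duration`, `resolution`, `generate_audio`, `seed`, `negative_prompt`), the call could look like the sketch below; the parameter values are purely illustrative.

{% code overflow="wrap" %}
```python
import requests

# Illustrative only: the same POST endpoint as in the example above, with the
# optional fields from the request schema filled in. Adjust the values as needed.
aimlapi_key = ""  # Insert your AIML API Key

response = requests.post(
    "https://api.aimlapi.com/v2/video/generations",
    headers={"Authorization": f"Bearer {aimlapi_key}"},
    json={
        "model": "google/veo-3.1-t2v",
        "prompt": "A menacing evil dragon appears above the tallest mountain.",
        "aspect_ratio": "16:9",       # "16:9" or "9:16"
        "duration": 8,                # 4, 6, or 8 seconds
        "resolution": "1080p",        # "720p" or "1080p"
        "generate_audio": True,
        "seed": 42,                   # reuse the same seed to get similar results
        "negative_prompt": "blurry, low-quality footage",
    },
)
print(response.json())  # contains the generation ID to poll with the GET endpoint
```
{% endcode %}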
**Generated video** (1280x720, with sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/google/veo-3-fast-image-to-video.md # Veo 3 Fast (Image-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `google/veo-3.0-i2v-fast` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} The model generates realistic 8-second 720p and 1080p videos with detailed visuals and audio. Optimized for speed and cost compared to the [Veo 3 (Image-to-Video)](https://docs.aimlapi.com/api-references/video-models/google/veo-3-image-to-video) model. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find two corresponding API schemas and examples for both endpoint calls.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server You can generate a video using this API. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/veo-3.0-i2v-fast"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame for the video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[4,6,8],"default":"8"},"aspect_ratio":{"type":"string","enum":["16:9","9:16"],"description":"The aspect ratio of the generated video."},"resolution":{"type":"string","enum":["720P","1080P"],"default":"720P"},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"seed":{"type":"integer","description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. If unspecified, a random number is chosen."},"enhance_prompt":{"type":"boolean","default":true,"description":"Whether to enhance the video generation."},"generate_audio":{"type":"boolean","default":true,"description":"Whether to generate audio for the video."}},"required":["model","prompt","image_url"],"title":"google/veo-3.0-i2v-fast"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Fetch the video After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server We have a classic [reproduction](https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg) of the famous da Vinci painting. Let's ask the model to generate a video where the Mona Lisa puts on glasses. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # replace with your actual AI/ML API key api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/generate/video/google/generation" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "google/veo-3.0-i2v-fast", "prompt": "The woman puts on glasses with her hands and then sighs and says slowly: 'Well...'.", "image_url": "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() # print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/generate/video/google/generation" params = { "generation_id": gen_id, } # Insert your AIML API Key instead of : headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) # print("Generation:", response.json()) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. 
Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "google/veo-3.0-i2v-fast", prompt: "The woman puts on glasses with her hands and then sighs and says slowly: 'Well...'.", image_url: "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", duration: "5", }); const url = new URL(`${baseUrl}/generate/video/google/generation`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/generate/video/google/generation`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 10 s until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("Failed to start generation"); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const startTime = Date.now(); const timeout = 600000; const checkStatus = () => { if (Date.now() - startTime > timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, 10000); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: e0ab34a6-1d6b-4b60-99cc-de15e210dbfc:veo-3.0-fast-generate-001 Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'status': 'completed', 'id': 'e0ab34a6-1d6b-4b60-99cc-de15e210dbfc:veo-3.0-fast-generate-001', 'video': {'url': 'https://cdn.aimlapi.com/generations/guepard/1754988463411-e17d66b5-57c1-4813-a4ed-3a7aa41723f9.mp4'}} ``` {% endcode %}
**Original** (with audio): [1280x720](https://drive.google.com/file/d/1_ine40cN16QmRUzUks_SZ7HjryWHK3Af/view?usp=sharing) **Low-res GIF preview**:

"The woman puts on glasses with her hands and then sighs and says slowly: 'Well...'."

--- # Source: https://docs.aimlapi.com/api-references/video-models/google/veo-3-fast-text-to-video.md # Veo 3 Fast (Text-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `google/veo-3.0-fast` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} The model generates realistic 8-second 720p and 1080p videos with detailed visuals and audio. Optimized for speed and cost compared to the [Veo 3 (Text-to-Video)](https://docs.aimlapi.com/api-references/video-models/google/veo3-text-to-video) model. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find two corresponding API schemas and an example with both endpoint calls.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a prompt. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/veo-3.0-fast"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[4,6,8],"default":"8"},"aspect_ratio":{"type":"string","enum":["16:9","9:16"],"description":"The aspect ratio of the generated video."},"resolution":{"type":"string","enum":["720P","1080P"],"default":"720P"},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"seed":{"type":"integer","description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. If unspecified, a random number is chosen."},"enhance_prompt":{"type":"boolean","default":true,"description":"Whether to enhance the video generation."},"generate_audio":{"type":"boolean","default":true,"description":"Whether to generate audio for the video."}},"required":["model","prompt"],"title":"google/veo-3.0-fast"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% hint style="warning" %} This model produces highly detailed and natural-looking videos, so generation may take around 2 minutes for a 8-second video with audio. {% endhint %} {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : aimlapi_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/generate/video/google/generation" headers = { "Authorization": f"Bearer {aimlapi_key}", } data = { "model": "google/veo-3.0-fast", "prompt": ''' A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming. 
''' } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/generate/video/google/generation" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {aimlapi_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; const https = require("https"); const { URL } = require("url"); // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "google/veo-3.0-fast", prompt: ` A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming. 
` }); const url = new URL(`${baseUrl}/generate/video/google/generation`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data) } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const result = JSON.parse(body); callback(result); } }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/generate/video/google/generation`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json" } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const result = JSON.parse(body); callback(result); }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 10 * 1000; // 10 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Gen_ID: 083c6067-0bc6-464a-943b-930b1eb1753b:veo-3.0-fast-generate-001 Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'status': 'completed', 'id': '083c6067-0bc6-464a-943b-930b1eb1753b:veo-3.0-fast-generate-001', 'video': {'url': 'https://cdn.aimlapi.com/generations/guepard/1754989116527-98dac026-6570-4d01-8a93-4234dc699032.mp4'}} ``` {% endcode %}
**Original** (with audio): [1280x720](https://drive.google.com/file/d/1D66CSzwl-hOzKuozQGK0_6WGQdurlG5G/view?usp=sharing) **Low-res GIF preview**:

"A menacing evil dragon appears in a distance above the tallest mountain, then rushes
toward the camera with its jaws open, revealing massive fangs. We see it's coming."
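The Python and JavaScript examples above call the legacy vendor URL (`.../generate/video/google/generation`). As the hint in the API Schemas section points out, the universal short URL works as well. A minimal sketch of creating the same task via the short URL:

{% code overflow="wrap" %}
```python
import requests

# A minimal sketch: creating the same generation task via the universal short URL
# mentioned in the hint above, instead of the legacy vendor URL used in the example.
aimlapi_key = ""  # Insert your AIML API Key

response = requests.post(
    "https://api.aimlapi.com/v2/video/generations",  # universal short URL
    headers={"Authorization": f"Bearer {aimlapi_key}"},
    json={
        "model": "google/veo-3.0-fast",
        "prompt": (
            "A menacing evil dragon appears in a distance above the tallest mountain, "
            "then rushes toward the camera with its jaws open, revealing massive fangs."
        ),
    },
)
print(response.json())  # contains the generation ID to poll with the GET endpoint
```
{% endcode %}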

--- # Source: https://docs.aimlapi.com/api-references/video-models/google/veo-3-image-to-video.md # Veo 3 (Image-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `google/veo-3.0-i2v` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} The model generates realistic 8-second 720p and 1080p videos with detailed visuals and audio, offering multiple styles and even dialogue support. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find two corresponding API schemas and examples for both endpoint calls.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server You can generate a video using this API. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/veo-3.0-i2v"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame for the video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[4,6,8],"default":"8"},"aspect_ratio":{"type":"string","enum":["16:9","9:16"],"description":"The aspect ratio of the generated video."},"resolution":{"type":"string","enum":["720P","1080P"],"default":"720P"},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"seed":{"type":"integer","description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. If unspecified, a random number is chosen."},"enhance_prompt":{"type":"boolean","default":true,"description":"Whether to enhance the video generation."},"generate_audio":{"type":"boolean","default":true,"description":"Whether to generate audio for the video."}},"required":["model","prompt","image_url"],"title":"google/veo-3.0-i2v"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Fetch the video After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server We have a classic [reproduction](https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg) of the famous da Vinci painting. Let's ask the model to generate a video where the Mona Lisa puts on glasses. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # replace with your actual AI/ML API key api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/generate/video/google/generation" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "google/veo-3.0-i2v", "prompt": "First, The woman silently puts on glasses with her hands. Then she sighs. 
After that she says once slowly: 'Well...'.", "image_url": "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() # print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/generate/video/google/generation" params = { "generation_id": gen_id, } # Insert your AIML API Key instead of : headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) # print("Generation:", response.json()) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "google/veo-3.0-i2v", prompt: "First, The woman silently puts on glasses with her hands. Then she sighs. 
After that she says once slowly: 'Well...'.", image_url: "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", duration: "5", }); const url = new URL(`${baseUrl}/generate/video/google/generation`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/generate/video/google/generation`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 10 s until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("Failed to start generation"); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const startTime = Date.now(); const timeout = 600000; const checkStatus = () => { if (Date.now() - startTime > timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, 10000); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: cb21d13b-0a63-4713-81d3-90783e7f83dc:veo-3.0-generate-001 Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'status': 'completed', 'id': 'cb21d13b-0a63-4713-81d3-90783e7f83dc:veo-3.0-generate-001', 'video': {'url': 'https://cdn.aimlapi.com/generations/guepard/1754992616167-0c4f83fc-6b2c-47ea-a3b6-e9f002990b16.mp4'}} ``` {% endcode %}
**Original** (with audio): [1280x720](https://drive.google.com/file/d/1p138X1qXIKLavdx5ebmwqfs7Xog06jq-/view?usp=sharing) **Low-res GIF preview**:

"First, The woman silently puts on glasses with her hands.
Then she sighs. After that she says once slowly: 'Well...'."
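According to the response schema above, a completed task may also carry a `meta.usage` block with the number of credits consumed. Since both `meta` and `usage` are nullable, read them defensively; the sketch below reuses the ID and URL from the response above, while the `credits_used` value is purely illustrative.

{% code overflow="wrap" %}
```python
# Illustrative only: reading the optional meta.usage.credits_used field from a
# completed response. Both meta and usage are nullable, so guard against None.
response_data = {
    "status": "completed",
    "id": "cb21d13b-0a63-4713-81d3-90783e7f83dc:veo-3.0-generate-001",
    "video": {"url": "https://cdn.aimlapi.com/generations/guepard/1754992616167-0c4f83fc-6b2c-47ea-a3b6-e9f002990b16.mp4"},
    "meta": {"usage": {"credits_used": 1}},  # example value, not taken from a real response
}

usage = (response_data.get("meta") or {}).get("usage") or {}
print("Credits used:", usage.get("credits_used"))
```
{% endcode %}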

--- # Source: https://docs.aimlapi.com/api-references/video-models/google/veo-3.1-extend-video.md # Veo 3.1 Extend Video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `google/veo3-1-extend-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find two corresponding API schemas and examples for both endpoint calls.
## API Schemas ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/veo3-1-extend-video"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"video_url":{"type":"string","format":"uri","description":"A HTTPS URL pointing to a video or a data URI containing a video. This video will be used as a reference during generation."},"aspect_ratio":{"type":"string","enum":["auto","16:9","9:16"],"default":"auto","description":"The aspect ratio of the generated video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[7],"default":"7"},"resolution":{"type":"string","enum":["720p"],"default":"720p"},"generate_audio":{"type":"boolean","default":true,"description":"Whether to generate audio for the video."},"auto_fix":{"type":"boolean","default":false,"description":"Whether to automatically attempt to fix prompts that fail content policy or other validation checks by rewriting them."}},"required":["model","prompt","video_url"],"title":"google/veo3-1-extend-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above. If the video generation task status is `complete`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Code Example The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "google/veo3-1-extend-video", "prompt": ''' Add a small fairy as a rider on the raccoon’s back. She must have a black-and-golden face and a cloak in the colors of a dark emerald tropical butterfly with bright blue shimmering spots. ''', "video_url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/racoon-in-the-forest.mp4" } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["queued", "generating"]: print(f"Status: {status}. 
Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "google/veo3-1-extend-video", prompt: ` Add a small fairy as a rider on the raccoon’s back. She must have a black-and-golden face and a cloak in the colors of a dark emerald tropical butterfly with bright blue shimmering spots.`, video_url: 'https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/racoon-in-the-forest.mp4', }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 15 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 15 * 1000; // 15 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }) } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: QWcoGFfDtx2Th5tCkeilb Status: queued. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: {'id': 'QWcoGFfDtx2Th5tCkeilb', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/flamingo/files/b/0a8a55de/XVKLuDxaCbYx6phcZoFf5_67d9400344f14a3f8567108c5decd774.mp4'}} ``` {% endcode %}
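Once the status is `completed`, the response contains a direct link to the MP4 file. As a small follow-up sketch (assuming the returned URL is directly downloadable, as in the log above; the local file name `extended_video.mp4` is just illustrative), you could save the result like this:

{% code overflow="wrap" %}
```python
import requests

# Placeholder: take the URL from the `video.url` field of the completed response
video_url = "<video.url from the completed response>"

with requests.get(video_url, stream=True) as video_response:
    video_response.raise_for_status()
    # Save the clip locally under an illustrative file name
    with open("extended_video.mp4", "wb") as f:
        for chunk in video_response.iter_content(chunk_size=8192):
            f.write(chunk)
```
{% endcode %}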
**Processing time**: \~ 2 min 56 sec. **Generated video** (1280x720, with sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/google/veo-3.1-fast-extend-video.md # Veo 3.1 Fast Extend Video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `google/veo3-1-fast-extend-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/veo3-1-fast-extend-video"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"video_url":{"type":"string","format":"uri","description":"A HTTPS URL pointing to a video or a data URI containing a video. This video will be used as a reference during generation."},"aspect_ratio":{"type":"string","enum":["auto","16:9","9:16"],"default":"auto","description":"The aspect ratio of the generated video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[7],"default":"7"},"resolution":{"type":"string","enum":["720p"],"default":"720p"},"generate_audio":{"type":"boolean","default":true,"description":"Whether to generate audio for the video."},"auto_fix":{"type":"boolean","default":false,"description":"Whether to automatically attempt to fix prompts that fail content policy or other validation checks by rewriting them."}},"required":["model","prompt","video_url"],"title":"google/veo3-1-fast-extend-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `complete`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Code Example The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "google/veo3-1-fast-extend-video", "prompt": ''' Add a small fairy as a rider on the raccoon’s back. She must have a black-and-golden face and a cloak in the colors of a dark emerald tropical butterfly with bright blue shimmering spots. ''', "video_url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/racoon-in-the-forest.mp4" } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["queued", "generating"]: print(f"Status: {status}. 
Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "google/veo3-1-fast-extend-video", prompt: ` Add a small fairy as a rider on the raccoon’s back. She must have a black-and-golden face and a cloak in the colors of a dark emerald tropical butterfly with bright blue shimmering spots.`, video_url: 'https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/racoon-in-the-forest.mp4', }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 15 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 15 * 1000; // 15 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }) } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
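The example above polls at a fixed 15-second interval. If you prefer to send fewer requests during longer generations, one possible variation (a sketch, not an official recommendation) is to grow the delay between checks, reusing the `get_video` helper defined in the Python example above. The sample output of the full example follows below.

{% code overflow="wrap" %}
```python
import time

def wait_for_video(gen_id, initial_delay=15, max_delay=60, timeout=1000):
    """Poll the status endpoint with a growing delay until the task leaves the queue."""
    delay = initial_delay
    start_time = time.time()
    while time.time() - start_time < timeout:
        task = get_video(gen_id)  # helper defined in the Python example above
        if task.get("status") not in ("queued", "generating"):
            return task
        print(f"Status: {task['status']}. Checking again in {delay} seconds.")
        time.sleep(delay)
        delay = min(delay * 2, max_delay)  # exponential backoff, capped at max_delay
    print("Timeout reached. Stopping.")
    return None
```
{% endcode %}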
Response {% code overflow="wrap" %} ```json5 Generation ID: rYEd6S2v_DryemDWteYWW Status: queued. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: {'id': 'rYEd6S2v_DryemDWteYWW', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/flamingo/files/b/0a8a55c7/hH0ToQiWnbHd79jE2_eVs_e88ad244c0044ca0b0c0860f1867dd37.mp4'}} ``` {% endcode %}
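The request in the example above uses only the required fields. The schema also documents a few optional switches; a hedged request-body sketch using them might look like the following (the prompt and the source video URL are illustrative placeholders):

{% code overflow="wrap" %}
```python
import requests

# Insert your AIML API Key instead of :
api_key = ""

payload = {
    "model": "google/veo3-1-fast-extend-video",
    # Required fields:
    "prompt": "Continue the scene: the raccoon walks toward the riverbank at sunset.",
    "video_url": "https://example.com/source-clip.mp4",  # illustrative placeholder URL
    # Optional fields documented in the schema above:
    "aspect_ratio": "16:9",   # "auto" (default), "16:9", or "9:16"
    "generate_audio": True,   # default: True
    "auto_fix": True,         # rewrite prompts that fail validation checks (default: False)
}

response = requests.post(
    "https://api.aimlapi.com/v2/video/generations",
    headers={"Authorization": f"Bearer {api_key}"},
    json=payload,
)
print(response.json()["id"])
```
{% endcode %}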
**Processing time**: \~ 2 min 23 sec. **Generated video** (1280x720, with sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/google/veo2-image-to-video.md # Veo 2 (Image-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `veo2/image-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} An advanced multimodal (image + text) AI model that transforms static images into high-quality, dynamic video content. It builds upon the success of Google's [Veo2 text-to-video](https://docs.aimlapi.com/api-references/video-models/google/veo2-text-to-video) model, offering unprecedented control and realism in video generation from still images, faithful content preservation from source images, and intuitive motion generation with physics-aware movement. ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server You can generate a video using this API. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["veo2/image-to-video"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame for the video."},"tail_image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image to be used as the last frame of the video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,6,7,8],"default":"5"},"aspect_ratio":{"type":"string","enum":["16:9","9:16"],"description":"The aspect ratio of the generated video."},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"seed":{"type":"integer","description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. If unspecified, a random number is chosen."},"enhance_prompt":{"type":"boolean","default":true,"description":"Whether to enhance the video generation."}},"required":["model","prompt","image_url"],"title":"veo2/image-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Fetch the video After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server We have a classic [reproduction](https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg) of the famous da Vinci painting. Let's ask the model to generate a video where the Mona Lisa puts on glasses. {% hint style="info" %} Generation may take around 40-50 seconds for a 5-second video. 
{% endhint %} {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # replace with your actual AI/ML API key api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/generate/video/google/generation" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "veo2/image-to-video", "prompt": "Mona Lisa puts on glasses with her hands.", "image_url": "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", "duration": "5", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/generate/video/google/generation" params = { "generation_id": gen_id, } # Insert your AIML API Key instead of : headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) # print("Generation:", response.json()) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. 
Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "veo2/image-to-video", prompt: "Mona Lisa puts on glasses with her hands.", image_url: "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", duration: "5", }); const url = new URL(`${baseUrl}/generate/video/google/generation`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/generate/video/google/generation`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 10 s until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("Failed to start generation"); return; } const genId = genResponse.id; console.log("Gen_ID:", genId); const startTime = Date.now(); const timeout = 600000; const checkStatus = () => { if (Date.now() - startTime > timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, 10000); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Gen_ID: 9bf4e6f6-dae7-41c7-94aa-354443b300c6:veo2/image-to-video Status: queued Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': '812dbc37-f15a-46a4-a058-8477bd243f5a:veo2/image-to-video', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/eagle/files/rabbit/D7LD6oJwKMF9gk9KjqwDV_output.mp4', 'content_type': 'video/mp4', 'file_name': 'output.mp4', 'file_size': 6827705}} ``` {% endcode %}
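In the completed response above, the `video` object also reports `file_name` and `file_size` (these extra fields appear in the sample output, though they are not guaranteed by the formal schema for every model). A small optional sketch, assuming `response_data` is the completed task object from the example above, saves the file under the reported name and checks the size on disk against the reported value:

{% code overflow="wrap" %}
```python
import os
import requests

# `response_data` is the completed task object from the example above
video = response_data["video"]

with requests.get(video["url"], stream=True) as video_response:
    video_response.raise_for_status()
    with open(video["file_name"], "wb") as f:
        for chunk in video_response.iter_content(chunk_size=8192):
            f.write(chunk)

# Optional sanity check against the size reported by the API
actual_size = os.path.getsize(video["file_name"])
print(f"Downloaded {actual_size} bytes (API reported {video['file_size']}).")
```
{% endcode %}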
**Original**: [1280x720](https://drive.google.com/file/d/19p8OlNOWJrJN9Z6KFTzloQ4ZAAmQIaDu/view?usp=sharing) **Low-res GIF preview**:
--- # Source: https://docs.aimlapi.com/api-references/video-models/google/veo2-text-to-video.md # Veo 2 (Text-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `veo2` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} Google’s cutting-edge AI model designed to generate highly realistic and cinematic video content from textual prompts or a combination of text and images. Leveraging advanced machine learning techniques, Veo2 excels in creating videos with natural motion, realistic physics, and professional-grade visual fidelity. ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a prompt, the aspect ratio, and the desired duration (5, 6, 7, or 8 seconds). ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["veo2"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,6,7,8],"default":"5"},"aspect_ratio":{"type":"string","enum":["16:9","9:16"],"description":"The aspect ratio of the generated video."},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"seed":{"type":"integer","description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. If unspecified, a random number is chosen."},"enhance_prompt":{"type":"boolean","default":true,"description":"Whether to enhance the video generation."}},"required":["model","prompt"],"title":"veo2"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Fetch the video After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% hint style="info" %} Generation may take around 40-50 seconds for a 5-second video. {% endhint %} {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : aimlapi_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/generate/video/google/generation" headers = { "Authorization": f"Bearer {aimlapi_key}", } data = { "model": "veo2", "prompt": ''' A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming. 
''' } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/generate/video/google/generation" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {aimlapi_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Gen_ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; const https = require("https"); const { URL } = require("url"); // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "veo2", prompt: ` A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming. 
` }); const url = new URL(`${baseUrl}/generate/video/google/generation`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data) } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const result = JSON.parse(body); callback(result); } }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/generate/video/google/generation`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json" } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const result = JSON.parse(body); callback(result); }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Gen_ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 10 * 1000; // 10 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
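The dragon example above sends only the required `model` and `prompt`. Beyond that minimal request, the schema documents several optional controls; a hedged sketch of a fuller request body (all values are illustrative) could look like this, sent to the same endpoint as in the example. The sample output of the full example follows below.

{% code overflow="wrap" %}
```python
# A fuller request body for the same POST call as in the example above (values are illustrative)
payload = {
    "model": "veo2",
    "prompt": "A lighthouse on a stormy coast at dusk, waves crashing against the rocks.",
    # Optional controls documented in the schema above:
    "duration": 8,                                         # 5, 6, 7, or 8 seconds
    "aspect_ratio": "16:9",                                # or "9:16"
    "negative_prompt": "text, watermark, blurry footage",
    "seed": 42,                                            # reuse a seed for more repeatable results
    "enhance_prompt": False,                               # disable automatic prompt enhancement (default: True)
}
```
{% endcode %}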
Response {% code overflow="wrap" %} ```json5 Gen_ID: e4d3af90-f643-44d0-9dcc-95c5b07f4bbf:veo2 Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete: { id: 'e4d3af90-f643-44d0-9dcc-95c5b07f4bbf:veo2', status: 'completed', video: { url: 'https://cdn.aimlapi.com/eagle/files/kangaroo/4zOxWejQAux5b9EgeeNHV_output.mp4', content_type: 'video/mp4', file_name: 'output.mp4', file_size: 2657506 } } ``` {% endcode %}
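Below, the original MP4 is linked alongside a low-res GIF preview. If you want to produce a similar preview from your own downloaded result, one possible approach is a short `ffmpeg` conversion (a sketch that assumes `ffmpeg` is installed locally and that the clip was saved as `output.mp4`):

{% code overflow="wrap" %}
```python
import subprocess

# Convert the downloaded MP4 into a small looping GIF preview (assumes ffmpeg is on PATH)
subprocess.run(
    [
        "ffmpeg",
        "-i", "output.mp4",            # the file downloaded from video.url
        "-vf", "fps=10,scale=480:-1",  # 10 fps, 480 px wide, aspect ratio preserved
        "-loop", "0",                  # loop forever
        "preview.gif",
    ],
    check=True,
)
```
{% endcode %}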
**Original (with sound)**: [1280x720](https://drive.google.com/file/d/1Xh3IMCeSRzMbaZ8Utnfoinrl3azkwa7w/view?usp=sharing) **Low-res GIF preview**:

"A menacing evil dragon appears in a distance above the tallest mountain,
then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming."

--- # Source: https://docs.aimlapi.com/api-references/video-models/google/veo3-text-to-video.md # Veo 3 (Text-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `google/veo3` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} The model generates high-quality short videos from text or image prompts with significant advancements over its predecessor, [Veo2](https://docs.aimlapi.com/api-references/video-models/google/veo2-text-to-video). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server You can generate a video using this API. In the basic setup, you only need a prompt. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["google/veo3"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[4,6,8],"default":"8"},"aspect_ratio":{"type":"string","enum":["16:9","9:16"],"description":"The aspect ratio of the generated video."},"resolution":{"type":"string","enum":["720P","1080P"],"default":"720P"},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"seed":{"type":"integer","description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. If unspecified, a random number is chosen."},"enhance_prompt":{"type":"boolean","default":true,"description":"Whether to enhance the video generation."},"generate_audio":{"type":"boolean","default":true,"description":"Whether to generate audio for the video."}},"required":["model","prompt"],"title":"google/veo3"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% hint style="warning" %} This model produces highly detailed and natural-looking videos, so generation may take around 2 minutes for a 8-second video with audio. {% endhint %} {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time base_url = "https://api.aimlapi.com/v2" # Insert your AIML API Key instead of : aimlapi_key = "" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/generate/video/google/generation" headers = { "Authorization": f"Bearer {aimlapi_key}", } data = { "model": "google/veo3", "prompt": ''' A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming. 
''' } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() # print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/generate/video/google/generation" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {aimlapi_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) # print("Generation:", response.json()) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Gen_ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; const https = require("https"); const { URL } = require("url"); // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "google/veo3", prompt: ` A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming. 
` }); const url = new URL(`${baseUrl}/generate/video/google/generation`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data) } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const result = JSON.parse(body); callback(result); } }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/generate/video/google/generation`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json" } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const result = JSON.parse(body); callback(result); }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Gen_ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 10 * 1000; // 10 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
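The example above relies on default settings. Compared with Veo 2, the schema for this model also exposes a `resolution` choice, a different set of durations, and audio generation. A hedged sketch of a request body using these Veo 3-specific options (values are illustrative) is shown here; the sample output of the full example follows below.

{% code overflow="wrap" %}
```python
# A request body using the Veo 3-specific options from the schema above (values are illustrative)
payload = {
    "model": "google/veo3",
    "prompt": "A hot air balloon drifts over a snowy mountain range at sunrise.",
    "duration": 8,            # 4, 6, or 8 seconds (default: 8)
    "resolution": "1080P",    # "720P" (default) or "1080P"
    "generate_audio": True,   # default: True
    "aspect_ratio": "16:9",
}
```
{% endcode %}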
Response {% code overflow="wrap" %} ```json5 Gen_ID: df43b636-6b09-4b6d-bbd2-f710ac0a3cfd:veo3 Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': 'df43b636-6b09-4b6d-bbd2-f710ac0a3cfd:veo3', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/eagle/files/tiger/R6LdSHLx937-O7ra31ySQ_output.mp4', 'content_type': 'video/mp4', 'file_name': 'output.mp4', 'file_size': 3716781}} ``` {% endcode %}
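The log above ends with a `completed` status. According to the response schema, a failed task instead comes back with `status: "error"` and an `error` object, and a `meta.usage` object may report the credits consumed. A hedged sketch of handling that final response (assuming `response_data` is the task object returned by the polling loop above):

{% code overflow="wrap" %}
```python
# `response_data` is the final task object returned by the polling loop above
status = response_data.get("status")

if status == "error":
    err = response_data.get("error") or {}
    print(f"Generation failed: {err.get('name')} - {err.get('message')}")
elif status == "completed":
    print("Video URL:", response_data["video"]["url"])
    usage = (response_data.get("meta") or {}).get("usage")
    if usage:
        print("Credits used:", usage["credits_used"])
```
{% endcode %}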
**Original (with sound)**: [1280x720](https://drive.google.com/file/d/1P9ZTV332Op4nhXM_KM4imdyOI-9Irl58/view?usp=sharing) **Low-res GIF preview**:

"A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming."

--- # Source: https://docs.aimlapi.com/api-references/speech-models/text-to-speech/microsoft/vibevoice-1.5b.md # vibevoice-1.5b {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `microsoft/vibevoice-1.5b` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} Designed to produce rich, multi-speaker conversations from text, the model is well-suited for podcasts and other long-form audio content. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/tts > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.TextToSpeechResponse":{"type":"object","properties":{"metadata":{"type":"object","properties":{"transaction_key":{"type":"string"},"request_id":{"type":"string"},"sha256":{"type":"string"},"created":{"type":"string","format":"date-time"},"duration":{"type":"number"},"channels":{"type":"number"},"models":{"type":"array","items":{"type":"string"}},"model_info":{"type":"object","additionalProperties":{"type":"object","properties":{"name":{"type":"string"},"version":{"type":"string"},"arch":{"type":"string"}},"required":["name","version","arch"]}}},"required":["transaction_key","request_id","sha256","created","duration","channels","models","model_info"]}},"required":["metadata"]}}},"paths":{"/v1/tts":{"post":{"operationId":"VoiceModelsController_textToSpeech_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["microsoft/vibevoice-1.5b"]},"script":{"type":"string","minLength":1,"maxLength":5000,"description":"The script to convert to speech. Can be formatted with \"Speaker X:\" prefixes for multi-speaker dialogues."},"speakers":{"type":"array","items":{"type":"object","properties":{"preset":{"type":"string","enum":["Alice [EN]","Alice [EN] (Background Music)","Carter [EN]","Frank [EN]","Maya [EN]","Anchen [ZH] (Background Music)","Bowen [ZH]","Xinran [ZH]"],"description":"Default voice preset to use for the speaker. Not used if audio_url is provided."},"audio_url":{"type":"string","format":"uri","description":"URL to a voice sample audio file. If provided, preset will be ignored."}}},"minItems":1,"maxItems":4,"default":[{"preset":"Alice [EN]"}],"description":"List of speakers to use for the script. If not provided, will be inferred from the script or voice samples."},"seed":{"type":"integer","description":"If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. 
Determinism is not guaranteed."},"cfg_scale":{"type":"number","minimum":0.1,"maximum":2,"default":1.3,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt."}},"required":["model","script"]}}}},"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.TextToSpeechResponse"}}}}},"tags":["Voice Models"]}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import os import requests def main(): url = "https://api.aimlapi.com/v1/tts" headers = { "Authorization": "Bearer ", } payload = { "model": "microsoft/vibevoice-1.5b", "script": "Speaker 1: Wow, whats happening, Alice? \nSpeaker 2: Oh, just the usual… a full-blown AI revolution. Nothing to worry about", "speakers": [ { "preset": "Frank [EN]" }, { "preset": "Alice [EN]" } ] } try: response = requests.post(url, headers=headers, json=payload) response.raise_for_status() response_data = response.json() audio_url = response_data["audio"]["url"] file_name = response_data["audio"]["file_name"] audio_response = requests.get(audio_url, stream=True) audio_response.raise_for_status() # Save with the original file extension from the API # dist = os.path.join(os.path.dirname(__file__), file_name) # if you run this code as a .py file dist = "audio.wav" # if you run this code in Jupyter Notebook with open(dist, "wb") as write_stream: for chunk in audio_response.iter_content(chunk_size=8192): if chunk: write_stream.write(chunk) print("Audio saved to:", dist) print(f"Duration: {response_data['duration']} seconds") print(f"Sample rate: {response_data['sample_rate']} Hz") except requests.exceptions.RequestException as e: print(f"Error making request: {e}") except Exception as e: print(f"Error: {e}") if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% endtabs %}
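The example above uses two built-in voice presets. Per the schema, each entry in `speakers` can instead point to a voice sample via `audio_url`, in which case the preset is ignored for that speaker. A hedged payload sketch for that case (the sample URL is a placeholder; the request is sent to the same `POST /v1/tts` endpoint as above) is shown here, followed by the sample output of the full example.

{% code overflow="wrap" %}
```python
# Same POST /v1/tts call as in the example above, but the first speaker is defined by a voice sample URL
payload = {
    "model": "microsoft/vibevoice-1.5b",
    "script": "Speaker 1: Welcome back to the show. \nSpeaker 2: Thanks, it's great to be here.",
    "speakers": [
        {"audio_url": "https://example.com/host-voice-sample.wav"},  # placeholder sample; preset is ignored when audio_url is set
        {"preset": "Maya [EN]"},                                      # built-in preset for the second speaker
    ],
}
```
{% endcode %}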
Response ``` Audio saved to: audio.wav Duration: 8.4 seconds Sample rate: 24000 Hz ```
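The example above relies on built-in voice presets. The schema also lets each entry in `speakers` reference a custom voice sample via `audio_url`, in which case `preset` is ignored. A minimal payload sketch, assuming a hypothetical sample URL:

{% code overflow="wrap" %}
```python
payload = {
    "model": "microsoft/vibevoice-1.5b",
    "script": "Speaker 1: Welcome back to the show!\nSpeaker 2: Thanks, it's great to be here.",
    "speakers": [
        # Hypothetical URL to your own voice sample; replace with a real audio file
        {"audio_url": "https://example.com/voice-sample.wav"},
        # The second speaker still uses a built-in preset
        {"preset": "Maya [EN]"},
    ],
}
```
{% endcode %}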
Listen to the dialogue we generated: {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/speech-models/text-to-speech/microsoft/vibevoice-7b.md # vibevoice-7b {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `microsoft/vibevoice-7b` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} Designed to produce rich, multi-speaker conversations from text, the model is well-suited for podcasts and other long-form audio content. The 7-billion-parameter version of the model. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/tts > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.TextToSpeechResponse":{"type":"object","properties":{"metadata":{"type":"object","properties":{"transaction_key":{"type":"string"},"request_id":{"type":"string"},"sha256":{"type":"string"},"created":{"type":"string","format":"date-time"},"duration":{"type":"number"},"channels":{"type":"number"},"models":{"type":"array","items":{"type":"string"}},"model_info":{"type":"object","additionalProperties":{"type":"object","properties":{"name":{"type":"string"},"version":{"type":"string"},"arch":{"type":"string"}},"required":["name","version","arch"]}}},"required":["transaction_key","request_id","sha256","created","duration","channels","models","model_info"]}},"required":["metadata"]}}},"paths":{"/v1/tts":{"post":{"operationId":"VoiceModelsController_textToSpeech_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["microsoft/vibevoice-7b"]},"script":{"type":"string","minLength":1,"maxLength":5000,"description":"The script to convert to speech. Can be formatted with \"Speaker X:\" prefixes for multi-speaker dialogues."},"speakers":{"type":"array","items":{"type":"object","properties":{"preset":{"type":"string","enum":["Alice [EN]","Alice [EN] (Background Music)","Carter [EN]","Frank [EN]","Maya [EN]","Anchen [ZH] (Background Music)","Bowen [ZH]","Xinran [ZH]"],"description":"Default voice preset to use for the speaker. Not used if audio_url is provided."},"audio_url":{"type":"string","format":"uri","description":"URL to a voice sample audio file. If provided, preset will be ignored."}}},"minItems":1,"maxItems":4,"default":[{"preset":"Alice [EN]"}],"description":"List of speakers to use for the script. If not provided, will be inferred from the script or voice samples."},"seed":{"type":"integer","description":"If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same seed and parameters should return the same result. 
Determinism is not guaranteed."},"cfg_scale":{"type":"number","minimum":0.1,"maximum":2,"default":1.3,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt."}},"required":["model","script"]}}}},"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.TextToSpeechResponse"}}}}},"tags":["Voice Models"]}}}} ``` ## Code Example {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import os import requests def main(): url = "https://api.aimlapi.com/v1/tts" headers = { "Authorization": "Bearer ", } payload = { "model": "microsoft/vibevoice-7b", "script": "Speaker 1: Wow, whats happening, Alice? \nSpeaker 2: Oh, just the usual… a full-blown AI revolution. Nothing to worry about", "speakers": [ { "preset": "Frank [EN]" }, { "preset": "Alice [EN]" } ] } try: response = requests.post(url, headers=headers, json=payload) response.raise_for_status() response_data = response.json() audio_url = response_data["audio"]["url"] file_name = response_data["audio"]["file_name"] audio_response = requests.get(audio_url, stream=True) audio_response.raise_for_status() # Save with the original file extension from the API # dist = os.path.join(os.path.dirname(__file__), file_name) # if you run this code as a .py file dist = "audio.wav" # if you run this code in Jupyter Notebook with open(dist, "wb") as write_stream: for chunk in audio_response.iter_content(chunk_size=8192): if chunk: write_stream.write(chunk) print("Audio saved to:", dist) print(f"Duration: {response_data['duration']} seconds") print(f"Sample rate: {response_data['sample_rate']} Hz") except requests.exceptions.RequestException as e: print(f"Error making request: {e}") except Exception as e: print(f"Error: {e}") if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% endtabs %}
Response ``` Audio saved to: audio.wav Duration: 7.866666666666666 seconds Sample rate: 24000 Hz ```
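Both VibeVoice models additionally accept the optional `seed` and `cfg_scale` parameters described in the schema above. A minimal payload sketch requesting best-effort deterministic sampling with slightly stricter prompt adherence (the values here are illustrative only):

{% code overflow="wrap" %}
```python
payload = {
    "model": "microsoft/vibevoice-7b",
    "script": "Speaker 1: Let's record this take again, exactly the same way.",
    "speakers": [{"preset": "Carter [EN]"}],
    "seed": 42,        # best-effort determinism; identical requests should return similar results
    "cfg_scale": 1.5,  # allowed range 0.1 to 2; higher values stick more closely to the prompt
}
```
{% endcode %}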
--- # Source: https://docs.aimlapi.com/api-references/video-models/minimax/video-01-live2d.md # video-01-live2d {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `video-01-live2d` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} An innovative AI model designed for generating high-quality videos from text prompts or images. This model can produce visually striking content with cinematic qualities, allowing users to create engaging videos quickly and efficiently. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["video-01-live2d"]},"prompt":{"type":"string","maxLength":2000,"description":"The text description of the scene, subject, or action to generate in the video."},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the first frame for the video.\nImage specifications: \n- format must be JPG, JPEG, or PNG; \n- aspect ratio should be greater than 2:5 and less than 5:2; \n- the shorter side must exceed 300 pixels; \n- file size must not exceed 20MB.","required":true},"enhance_prompt":{"type":"boolean","default":true,"description":"If True, the incoming prompt will be automatically optimized to improve generation quality when needed. For more precise control, set it to False — the model will then follow the instructions more strictly."}},"required":["model","prompt","image_url"],"title":"video-01-live2d"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"get":{"operationId":"VideoControllerV2_pollVideo_v2","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"description":"Successfully generated video","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Video.v2.PollVideoResponseDTO"}}}}},"tags":["Video Models"]}}},"components":{"schemas":{"Video.v2.PollVideoResponseDTO":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."},"duration":{"type":"number","nullable":true,"description":"The duration of the video."}},"required":["url"]},"duration":{"type":"number","nullable":true,"description":"The duration of the video."},"error":{"nullable":true,"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"tokens_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["tokens_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server We have a classic [reproduction](https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg) of the famous da Vinci painting. Let's ask the model to generate a video where the Mona Lisa puts on glasses. Generation may take around 3 minutes for a 6-second video. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # replace with your actual AI/ML API key api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/generate/video/minimax/generation" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "video-01-live2d", "prompt": "Mona Lisa puts on glasses with her hands.", "first_frame_image": "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() # print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/generate/video/minimax/generation" params = { "generation_id": gen_id, } # Insert your AIML API Key instead of : headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) # print("Generation:", response.json()) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("generation_id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. 
Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "video-01-live2d", prompt: "Mona Lisa puts on glasses with her hands.", first_frame_image: "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", }); const url = new URL(`${baseUrl}/generate/video/minimax/generation`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/generate/video/minimax/generation`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 10 s until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.generation_id) { console.error("Failed to start generation"); return; } const genId = genResponse.generation_id; console.log("Generation ID:", genId); const startTime = Date.now(); const timeout = 600000; const checkStatus = () => { if (Date.now() - startTime > timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, 10000); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: 288439434137694 Status: queued Still waiting... Checking again in 10 seconds. Status: queued Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': '288439434137694', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/whale/inference_output%2Fvideo%2F2025-07-08%2Fd1626f4f-be9c-4aca-87da-5b749efcdef7%2Foutput.mp4?Expires=1752005613&OSSAccessKeyId=LTAI5tAmwsjSaaZVA6cEFAUu&Signature=5guXof04YOOgZPBhkeklSFY5gqM%3D'}} ``` {% endcode %}
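The full example above calls the legacy MiniMax-specific URL. Based on the POST and GET schemas shown earlier, the same task could also be created and polled through the universal endpoint; note that this variant takes `image_url` instead of `first_frame_image` and returns the task identifier in the `id` field. A minimal sketch:

{% code overflow="wrap" %}
```python
import requests

api_key = ""  # your AI/ML API key

# Create the generation task via the universal endpoint
response = requests.post(
    "https://api.aimlapi.com/v2/video/generations",
    headers={"Authorization": f"Bearer {api_key}"},
    json={
        "model": "video-01-live2d",
        "prompt": "Mona Lisa puts on glasses with her hands.",
        "image_url": "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg",
    },
)
response.raise_for_status()
gen_id = response.json()["id"]

# Poll the same endpoint with the returned id until the task status is "completed"
status_response = requests.get(
    "https://api.aimlapi.com/v2/video/generations",
    headers={"Authorization": f"Bearer {api_key}"},
    params={"generation_id": gen_id},
)
print(status_response.json())
```
{% endcode %}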
Generated Video **Original**: [720x1072](https://drive.google.com/file/d/1AnWhtqf-_DI9f7B3wkhIKAa1YmbE06ns/view?usp=sharing) **Low-res GIF preview**:
--- # Source: https://docs.aimlapi.com/api-references/video-models/minimax/video-01.md # video-01 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `video-01` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} An innovative AI model designed for generating high-quality videos from text prompts or images. Developed by Hailuo AI, this model can produce visually striking content with cinematic qualities, allowing users to create engaging videos quickly and efficiently. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["video-01"]},"prompt":{"type":"string","maxLength":2000,"description":"The text description of the scene, subject, or action to generate in the video."},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the first frame for the video.\nImage specifications: \n- format must be JPG, JPEG, or PNG; \n- aspect ratio should be greater than 2:5 and less than 5:2; \n- the shorter side must exceed 300 pixels; \n- file size must not exceed 20MB."},"enhance_prompt":{"type":"boolean","default":true,"description":"If True, the incoming prompt will be automatically optimized to improve generation quality when needed. For more precise control, set it to False — the model will then follow the instructions more strictly."}},"required":["model","prompt"],"title":"video-01"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"get":{"operationId":"VideoControllerV2_pollVideo_v2","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"description":"Successfully generated video","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Video.v2.PollVideoResponseDTO"}}}}},"tags":["Video Models"]}}},"components":{"schemas":{"Video.v2.PollVideoResponseDTO":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."},"duration":{"type":"number","nullable":true,"description":"The duration of the video."}},"required":["url"]},"duration":{"type":"number","nullable":true,"description":"The duration of the video."},"error":{"nullable":true,"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"tokens_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["tokens_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}} ``` --- # Source: https://docs.aimlapi.com/api-references/video-models.md # Video Models ## Overview With our API you can generate videos from your prompt and imagination. We support multiple video models. You can find the [complete list](#all-available-video-models-1) along with API reference links at the end of the page. ## Example
Full example explanation As an example, we will generate a video using the popular **video-01** model from the Chinese company **MiniMax**. This model, as you can verify by checking its [**API Reference**](https://docs.aimlapi.com/api-overview/video-models/minimax-video), accepts an image as input (serving as the first frame of the future video) along with a text prompt, where we can describe what should happen to this image throughout the video. We used a publicly available [image](https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Cheetah4.jpg/1200px-Cheetah4.jpg) from Wikimedia and described in the prompt that the cheetah turns toward the camera. A notable feature of the **video-01** model is that video generation and retrieving the final video file from the server are done through separate API calls. *(AIML API tokens are only consumed during the first step—i.e., the actual video generation.)* You can insert the contents of each of the two code blocks into a separate Python file in your preferred development environment (or, for example, place each part in a separate cell in **Jupyter Notebook**). Replace `` in both fragments with the **AIML API Key** obtained from your [account](https://aimlapi.com/app/keys). Next, run the first code block. If everything is set up correctly, you will see the following line in the program output (the specific numbers, of course, will vary):\ `Generation: {'generation_id': '234954179076239'}` This means that our generation has been queued on the server. Now, copy this numerical value (*without* quotation marks) and insert it into the second code block, replacing ``. Then execute the second code block to request the final video file from the server. Processing the request on the server may take some time (usually less than a minute). If the requested file is not yet ready, the output will display the corresponding status. Wait a bit and rerun the second code block. *(If you're comfortable with coding, you can modify the script to perform this request inside a loop.)* In our case, after three reruns of the second code block (waiting a total of about 20 seconds), we saw the following output: {% code overflow="wrap" %} ```json Generation: {'id': '234954179076239', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/whale/inference_output%2Fvideo%2F2025-02-07%2F0c4d54db-da1b-404a-a495-600426796415%2Foutput.mp4?Expires=1738947643&OSSAccessKeyId=LTAI5tAmwsjSaaZVA6cEFAUu&Signature=mo3sfeNpVz5mNQW%2BSt2g8d2%2Fvf4%3D'}} ``` {% endcode %} As you can see, the `'status'` is now `'completed'`, and the output also contains a URL where the generated video file can be downloaded. Here is the resulting turning cheetah ([original 960x720px](https://drive.google.com/file/d/1T06W3BGZ_HanpkN-_lvr7U9HRH7IHG9C/view?usp=sharing)):
The first code block (generation): {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests def main(): url = "https://api.aimlapi.com/v2/generate/video/minimax/generation" payload = { "model": "video-01", "prompt": "Cheetah turns toward the camera.", "first_frame_image": "https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Cheetah4.jpg/1200px-Cheetah4.jpg", } # Insert your AIML API Key instead of : headers = {"Authorization": "Bearer ", "Content-Type": "application/json"} response = requests.post(url, json=payload, headers=headers) print("Generation:", response.json()) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% endtabs %} The second code block (retrieving the generated video file from the server): {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests def main(): url = "https://api.aimlapi.com/v2/generate/video/minimax/generation" params = { # Insert the generation_id (that was returned by the generation part above) in the quotation marks instead of : "generation_id": "", } # Insert your AIML API Key instead of : headers = {"Authorization": "Bearer ", "Content-Type": "application/json"} response = requests.get(url, params=params, headers=headers) print("Generation:", response.json()) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% endtabs %} ## All Available Video Models
| Model ID + API Reference link | Developer | Context | Model Card |
|---|---|---|---|
| `alibaba/wan2.1-t2v-plus` | Alibaba Cloud | | Wan2.1 Plus |
| `alibaba/wan2.1-t2v-turbo` | Alibaba Cloud | | Wan2.1 Turbo |
| `alibaba/wan2.2-t2v-plus` | Alibaba Cloud | | Wan 2.2 T2V |
| `alibaba/wan2.5-t2v-preview` | Alibaba Cloud | | Wan 2.5 Text-to-Video |
| `alibaba/wan2.5-i2v-preview` | Alibaba Cloud | | Wan 2.5 Image-to-Video |
| `alibaba/wan2.2-14b-animate-replace` | Alibaba Cloud | | Wan 2.2 14b animate replace |
| `alibaba/wan2.2-14b-animate-move` | Alibaba Cloud | | Wan 2.2 14b animate move |
| `alibaba/wan2.2-vace-fun-a14b-reframe` | Alibaba Cloud | | Wan 2.2 vace fun 14b reframe |
| `alibaba/wan2.2-vace-fun-a14b-outpainting` | Alibaba Cloud | | Wan 2.2 vace fun 14b outpainting |
| `alibaba/wan2.2-vace-fun-a14b-inpainting` | Alibaba Cloud | | Wan 2.2 vace fun 14b inpainting |
| `alibaba/wan2.2-vace-fun-a14b-pose` | Alibaba Cloud | | Wan 2.2 vace fun 14b pose |
| `alibaba/wan2.2-vace-fun-14b-depth` | Alibaba Cloud | | Wan 2.2 vace fun 14b depth |
| `alibaba/wan2.5-t2v-preview` | Alibaba Cloud | | Wan 2.5 Preview |
| `alibaba/wan2.5-i2v-preview` | Alibaba Cloud | | - |
| `alibaba/wan-2-6-t2v` | Alibaba Cloud | | Wan 2.6 Text-to-Video |
| `alibaba/wan-2-6-i2v` | Alibaba Cloud | | Wan 2.6 Image-to-Video |
| `alibaba/wan-2-6-r2v` | Alibaba Cloud | | Wan 2.6 Reference-to-Video |
| `bytedance/seedance-1-0-lite-t2v` | ByteDance | | Seedance 1.0 lite Text to Video |
| `bytedance/seedance-1-0-lite-i2v` | ByteDance | | Seedance 1.0 lite Image to Video |
| `bytedance/seedance-1-0-pro-t2v` | ByteDance | | Seedance 1.0 Pro |
| `bytedance/seedance-1-0-pro-i2v` | ByteDance | | Seedance 1.0 Pro |
| `bytedance/seedance-1-0-pro-fast` | ByteDance | | Seedance 1.0 Pro Fast |
| `bytedance/omnihuman` | ByteDance | | OmniHuman |
| `bytedance/omnihuman/v1.5` | ByteDance | | OmniHuman v1.5 |
| `veo2` | Google | | Veo2 Text-to-Video |
| `veo2/image-to-video` | Google | | Veo2 Image-to-Video |
| `google/veo3` | Google | | Veo 3 |
| `google/veo-3.0-i2v` | Google | | Veo 3 I2V |
| `google/veo-3.0-fast` | Google | | Veo 3 Fast |
| `google/veo-3.0-i2v-fast` | Google | | Veo 3 I2V Fast |
| `google/veo-3.1-t2v` | Google | | Veo 3.1 Text-to-Video |
| `google/veo-3.1-t2v-fast` | Google | | Veo 3.1 Fast Text-to-Video |
| `google/veo-3.1-i2v` | Google | | Veo 3.1 Image-to-Video |
| `google/veo-3.1-i2v-fast` | Google | | Veo 3.1 Fast Image-to-Video |
| `google/veo-3.1-reference-to-video` | Google | | Veo 3.1 Reference-to-Video |
| `google/veo-3.1-first-last-image-to-video` | Google | | Veo 3.1 First-Last Frame-to-Video |
| `google/veo-3.1-first-last-image-to-video-fast` | Google | | Veo 3.1 Fast First-Last Frame-to-Video |
| `google/veo3-1-extend-video` | Google | | Veo 3.1 Extend Video |
| `google/veo3-1-fast-extend-video` | Google | | Veo 3.1 Fast Extend Video |
| `kling-video/v1/standard/image-to-video` | Kling AI | | Kling AI (image-to-video) |
| `kling-video/v1/standard/text-to-video` | Kling AI | | Kling AI (text-to-video) |
| `kling-video/v1/pro/image-to-video` | Kling AI | | Kling AI (image-to-video) |
| `kling-video/v1/pro/text-to-video` | Kling AI | | Kling AI (text-to-video) |
| `kling-video/v1.6/standard/text-to-video` | Kling AI | | Kling 1.6 Standard |
| `kling-video/v1.6/standard/image-to-video` | Kling AI | | Kling 1.6 Standard |
| `kling-video/v1.6/pro/image-to-video` | Kling AI | | Kling 1.6 Pro |
| `kling-video/v1.6/pro/text-to-video` | Kling AI | | Kling 1.6 Pro |
| `klingai/kling-video-v1.6-pro-effects` | Kling AI | | Kling 1.6 Pro Effects |
| `klingai/kling-video-v1.6-standard-effects` | Kling AI | | Kling 1.6 Standard Effects |
| `kling-video/v1.6/standard/multi-image-to-video` | Kling AI | | Kling V1.6 Multi-Image-to-Video |
| `klingai/v2-master-image-to-video` | Kling AI | | Kling 2.0 Master |
| `klingai/v2-master-text-to-video` | Kling AI | | Kling 2.0 Master |
| `kling-video/v2.1/standard/image-to-video` | Kling AI | | Kling V2.1 Standard I2V |
| `kling-video/v2.1/pro/image-to-video` | Kling AI | | Kling V2.1 Pro I2V |
| `klingai/v2.1-master-image-to-video` | Kling AI | | Kling 2.1 Master |
| `klingai/v2.1-master-text-to-video` | Kling AI | | Kling 2.1 Master |
| `klingai/v2.5-turbo/pro/image-to-video` | Kling AI | | Kling Video v2.5 Turbo Pro Image-to-Video |
| `klingai/v2.5-turbo/pro/text-to-video` | Kling AI | | Kling Video v2.5 Turbo Pro Text-to-Video |
| `klingai/avatar-standard` | Kling AI | | Kling AI Avatar Standard |
| `klingai/avatar-pro` | Kling AI | | Kling AI Avatar Pro |
| `klingai/video-v2-6-pro-text-to-video` | Kling AI | | Kling 2.6 Pro Text-to-Video |
| `klingai/video-v2-6-pro-image-to-video` | Kling AI | | Kling 2.6 Pro Image-to-Video |
| `klingai/video-o1-image-to-video` | Kling AI | | Kling Video O1 Image to Video |
| `klingai/video-o1-reference-to-video` | Kling AI | | Kling Video O1 Reference-to-Video |
| `klingai/video-o1-video-to-video-edit` | Kling AI | | Kling Video O1 Video to Video Edit |
| `klingai/video-o1-video-to-video-reference` | Kling AI | | Kling Video O1 Video-to-Video Reference |
| `klingai/video-v2-6-pro-motion-control` | Kling AI | | Coming Soon |
| `krea/krea-wan-14b/text-to-video` | Krea | | Krea WAN 14B Text-to-Video |
| `krea/krea-wan-14b/video-to-video` | Krea | | Krea WAN 14B Video-to-Video |
| `ltxv/ltxv-2` | LTXV | | Coming Soon |
| `ltxv/ltxv-2-fast` | LTXV | | Coming Soon |
| `luma/ray-2` | Luma AI | | Ray 2 |
| `luma/ray-flash-2` | Luma AI | | Ray Flash 2 |
| `magic/text-to-video` | Magic | | Magic Video |
| `magic/image-to-video` | Magic | | Magic Video |
| `magic/video-to-video` | Magic | | Magic Video |
| `video-01` | MiniMax | | MiniMax Video-01 |
| `video-01-live2d` | MiniMax | | - |
| `minimax/hailuo-02` | MiniMax | | Hailuo 02 |
| `minimax/hailuo-2.3` | MiniMax | | Hailuo 2.3 |
| `minimax/hailuo-2.3-fast` | MiniMax | | Hailuo 2.3 Fast |
| `sora-2-t2v` | OpenAI | | - |
| `sora-2-i2v` | OpenAI | | - |
| `sora-2-pro-t2v` | OpenAI | | - |
| `sora-2-pro-i2v` | OpenAI | | - |
| `pixverse/v5/text-to-video` | PixVerse | | Pixverse v5 Text-to-Video |
| `pixverse/v5/image-to-video` | PixVerse | | Pixverse v5 Image-to-Video |
| `pixverse/v5/transition` | PixVerse | | Pixverse v5 Transition |
| `pixverse/v5-5-text-to-video` | PixVerse | | PixVerse V5.5 Text-to-Video |
| `pixverse/v5-5-image-to-video` | PixVerse | | Pixverse v5.5 Image-to-Video |
| `pixverse/lip-sync` | PixVerse | | Coming Soon |
| `gen3a_turbo` | Runway | | Runway Gen-3 turbo |
| `runway/gen4_turbo` | Runway | | Runway Gen-4 Turbo |
| `runway/gen4_aleph` | Runway | | Aleph |
| `runway/act_two` | Runway | | Runway Act Two |
| `sber-ai/kandinsky5-t2v` | Sber AI | | Kandinsky 5 Standard |
| `sber-ai/kandinsky5-distill-t2v` | Sber AI | | Kandinsky 5 Distill |
| `tencent/hunyuan-video-foley` | Tencent | | HunyuanVideo Foley |
| `veed/fabric-1.0` | Veed | | fabric-1.0 |
| `veed/fabric-1.0-fast` | Veed | | fabric-1.0-fast |
--- # Source: https://docs.aimlapi.com/api-references/video-models/kling-ai/video-o1-image-to-video.md # o1/image-to-video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `klingai/video-o1-image-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} A variant of Kling’s O1 omni-model that takes a reference image along with an instructional prompt as input. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schemas Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find two corresponding API schemas and an example with both endpoint calls. ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["klingai/video-o1-image-to-video"]},"prompt":{"type":"string","maxLength":2500,"description":"The text description of the scene, subject, or action to generate in the video."},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame for the video."},"last_image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image to be used as the last frame of the video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10],"default":"5"}},"required":["model","prompt","image_url"],"title":"klingai/video-o1-image-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. 
This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. ## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "klingai/video-o1-image-to-video", "prompt": "Mona Lisa puts on glasses with her hands.", "image_url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/mona_lisa_extended.jpg", "duration": "5", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 # 1000 sec = 16 min 40 sec while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["queued", "generating"]: print(f"Status: {status}. Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. 
Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "klingai/video-o1-image-to-video", prompt: "Mona Lisa puts on glasses with her hands.", image_url: "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/mona_lisa_extended.jpg", duration: "5", }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 15 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec = 16 min 40 sec const interval = 15 * 1000; // 15 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }) } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: ee188bbb-47f1-41b1-b0d6-24ad799e3205:klingai/video-o1-image-to-video Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: { id: 'ee188bbb-47f1-41b1-b0d6-24ad799e3205:klingai/video-o1-image-to-video', status: 'completed', video: { url: 'https://cdn.aimlapi.com/flamingo/files/b/0a8740c6/mTBLg8suAYBJkqKr7u7_I_output.mp4' } } ``` {% endcode %}
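Once the task status is `completed`, the `video.url` field from the response points to the MP4 file, which can be downloaded with a plain HTTP request. A minimal sketch (the output file name is arbitrary):

{% code overflow="wrap" %}
```python
import requests

# Paste the `video.url` value from the completed response above
video_url = ""

download = requests.get(video_url, stream=True)
download.raise_for_status()
with open("output.mp4", "wb") as f:
    for chunk in download.iter_content(chunk_size=8192):
        f.write(chunk)
```
{% endcode %}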
**Processing time**: \~ 1 min 19 sec. **Generated video** (1920x1080, without sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/kling-ai/video-o1-reference-to-video.md # o1/reference-to-video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `klingai/video-o1-reference-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} A variant of Kling’s O1 omni-model that takes several reference images along with an instructional prompt as input. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schemas Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find two corresponding API schemas and an example with both endpoint calls. ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["klingai/video-o1-reference-to-video"]},"prompt":{"type":"string","maxLength":2500,"description":"The text description of the scene, subject, or action to generate in the video."},"image_list":{"type":"array","items":{"type":"string","format":"uri"},"minItems":1,"maxItems":7,"description":"Array of image URLs for multi-image-to-video generation."},"elements":{"type":"array","items":{"type":"object","properties":{"reference_image_urls":{"type":"array","items":{"type":"string","format":"uri"},"minItems":1,"maxItems":4,"description":"Additional reference images from different angles."},"frontal_image_url":{"type":"string","format":"uri","description":"The frontal image of the element (main view)."}},"required":["reference_image_urls","frontal_image_url"]},"maxItems":4,"description":"Elements (characters/objects) to include in the video."},"aspect_ratio":{"type":"string","enum":["16:9","9:16","1:1"],"default":"16:9","description":"The aspect ratio of the generated video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10],"default":"5"}},"required":["model","prompt"],"title":"klingai/video-o1-reference-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if 
any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. ## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Code Example The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "klingai/video-o1-reference-to-video", "prompt": "A graceful ballerina dancing outside a circus tent on green grass, with colorful wildflowers swaying around her as she twirls and poses in the meadow.", "image_list": [ "https://storage.googleapis.com/falserverless/example_inputs/veo31-r2v-input-1.png", "https://storage.googleapis.com/falserverless/example_inputs/veo31-r2v-input-2.png", "https://storage.googleapis.com/falserverless/example_inputs/veo31-r2v-input-3.png" ], "duration": "5", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["waiting", "queued", "generating"]: print(f"Status: {status}. Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. 
Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "klingai/video-o1-reference-to-video", prompt: "A graceful ballerina dancing outside a circus tent on green grass, with colorful wildflowers swaying around her as she twirls and poses in the meadow.", image_list: [ "https://storage.googleapis.com/falserverless/example_inputs/veo31-r2v-input-1.png", "https://storage.googleapis.com/falserverless/example_inputs/veo31-r2v-input-2.png", "https://storage.googleapis.com/falserverless/example_inputs/veo31-r2v-input-3.png" ], duration: "5", }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 15 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 15 * 1000; // 15 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["waiting", "queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }) } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: a3f1e246-0831-4d6e-893a-990fb5c214ea:klingai/video-o1-reference-to-video Status: queued. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: {'id': 'a3f1e246-0831-4d6e-893a-990fb5c214ea:klingai/video-o1-reference-to-video', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/flamingo/files/b/0a8787d0/ERB4UYY0-4b7THK5Uq49w_output.mp4'}} ``` {% endcode %}
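Besides a flat `image_list`, the POST schema above also accepts an `elements` array, where each character or object is described by a frontal image plus up to four additional reference angles. A minimal payload sketch with hypothetical image URLs:

{% code overflow="wrap" %}
```python
payload = {
    "model": "klingai/video-o1-reference-to-video",
    "prompt": "The character walks through a sunlit market square.",
    "elements": [
        {
            # Main (frontal) view of the character (hypothetical URL)
            "frontal_image_url": "https://example.com/character-front.png",
            # Up to 4 additional reference angles
            "reference_image_urls": [
                "https://example.com/character-side.png"
            ],
        }
    ],
    "aspect_ratio": "16:9",
    "duration": 5,
}
```
{% endcode %}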
**Processing time**: \~ 2 min 6 sec. **Generated video** (1920x1080, without sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/kling-ai/video-o1-video-to-video-edit.md # o1/video-to-video/edit {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `klingai/video-o1-video-to-video-edit` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} The model transforms an input video according to a natural-language text prompt, altering style, visual attributes, or the overall look of the scene while preserving the original motion and structural layout of the footage. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure the key is enabled on the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find a code example that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key.\ :black\_small\_square: Adjust the input field used by this model (for example, prompt, input text, instructions, media source, or other model-specific input) to match your request. :digit\_four: **(Optional)** **Adjust other optional parameters if needed** Only the required parameters shown in the example are needed to run the request, but you can include optional parameters to fine-tune behavior. Below, you can find the corresponding **API schema**, which lists all available parameters and usage notes. :digit\_five: **Run your modified code** Run your modified code inside your development environment. Response time depends on many factors, but for simple requests it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step-by-step, feel free to use our [**Quickstart guide.**](https://docs.aimlapi.com/quickstart/setting-up) {% endhint %}
## API Schemas Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find two corresponding API schemas and an example with both endpoint calls. ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["klingai/video-o1-video-to-video-edit"]},"prompt":{"type":"string","maxLength":2500,"description":"The text description of the scene, subject, or action to generate in the video."},"video_url":{"type":"string","format":"uri","description":"A HTTPS URL pointing to a video or a data URI containing a video. This video will be used as a reference during generation."},"image_list":{"type":"array","items":{"type":"string","format":"uri"},"minItems":1,"maxItems":7,"description":"Array of image URLs for multi-image-to-video generation."},"elements":{"type":"array","items":{"type":"object","properties":{"reference_image_urls":{"type":"array","items":{"type":"string","format":"uri"},"minItems":1,"maxItems":4,"description":"Additional reference images from different angles."},"frontal_image_url":{"type":"string","format":"uri","description":"The frontal image of the element (main view)."}},"required":["reference_image_urls","frontal_image_url"]},"maxItems":4,"description":"Elements (characters/objects) to include in the video."},"keep_audio":{"type":"boolean","default":false,"description":"Whether to keep the original audio from the video."}},"required":["model","prompt","video_url"],"title":"klingai/video-o1-video-to-video-edit"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Code Example The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "klingai/video-o1-video-to-video-edit", "prompt":''' Add a small fairy as a rider on the raccoon’s back. She must have a black-and-golden face and a cloak in the colors of a dark emerald tropical butterfly with bright blue shimmering spots. ''', "video_url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/racoon-in-the-forest.mp4" } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["queued", "generating"]: print(f"Status: {status}. 
Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "klingai/video-o1-video-to-video-edit", prompt: ` Add a small fairy as a rider on the raccoon’s back. She must have a black-and-golden face and a cloak in the colors of a dark emerald tropical butterfly with bright blue shimmering spots.`, video_url: "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/racoon-in-the-forest.mp4", }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 15 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 15 * 1000; // 15 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }) } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: 9177d714-9f53-4ce7-829f-b81e2223c48b:klingai/video-o1-video-to-video-edit Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: {'id': '9177d714-9f53-4ce7-829f-b81e2223c48b:klingai/video-o1-video-to-video-edit', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/flamingo/files/b/0a875051/Uce7GGCPuWbicbRrdaI4U_output.mp4'}} ``` {% endcode %}
**Processing time**: \~ 3 min 55 sec. **Generated video** (1940x1068, without sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/kling-ai/video-o1-video-to-video-reference.md # o1/video-to-video-reference {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `klingai/video-o1-video-to-video-reference` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview This model performs video-to-video editing by applying a reference style or identity to source footage, enabling appearance transfer across clips while preserving the motion and structure of the original video. It is well-suited for maintaining consistent characters, branding elements, or artistic style across multiple outputs derived from related source videos. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure the key is enabled on the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find a code example that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key.\ :black\_small\_square: Adjust the input field used by this model (for example, prompt, input text, instructions, media source, or other model-specific input) to match your request. :digit\_four: **(Optional)** **Adjust other optional parameters if needed** Only the required parameters shown in the example are needed to run the request, but you can include optional parameters to fine-tune behavior. Below, you can find the corresponding **API schema**, which lists all available parameters and usage notes. :digit\_five: **Run your modified code** Run your modified code inside your development environment. Response time depends on many factors, but for simple requests it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step-by-step, feel free to use our [**Quickstart guide.**](https://docs.aimlapi.com/quickstart/setting-up) {% endhint %}
## API Schemas ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["klingai/video-o1-video-to-video-reference"]},"prompt":{"type":"string","maxLength":2500,"description":"The text description of the scene, subject, or action to generate in the video."},"video_url":{"type":"string","format":"uri","description":"A HTTPS URL pointing to a video or a data URI containing a video. This video will be used as a reference during generation."},"image_list":{"type":"array","items":{"type":"string","format":"uri"},"minItems":1,"maxItems":4,"description":"Array of image URLs for multi-image-to-video generation."},"aspect_ratio":{"type":"string","enum":["16:9","9:16","1:1"],"default":"16:9","description":"The aspect ratio of the generated video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10],"default":"5"},"elements":{"type":"array","items":{"type":"object","properties":{"reference_image_urls":{"type":"array","items":{"type":"string","format":"uri"},"minItems":1,"maxItems":4,"description":"Additional reference images from different angles."},"frontal_image_url":{"type":"string","format":"uri","description":"The frontal image of the element (main view)."}},"required":["reference_image_urls","frontal_image_url"]},"maxItems":4,"description":"Elements (characters/objects) to include in the video."},"keep_audio":{"type":"boolean","default":false,"description":"Whether to keep the original audio from the video."}},"required":["model","prompt","video_url"],"title":"klingai/video-o1-video-to-video-reference"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Code Example The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "klingai/video-o1-video-to-video-reference", "prompt":''' Add a small fairy as a rider on the raccoon’s back. She must have a black-and-golden face and a cloak in the colors of a dark emerald tropical butterfly with bright blue shimmering spots. ''', "video_url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/racoon-in-the-forest.mp4" } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["waiting", "queued", "generating"]: print(f"Status: {status}. 
Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "klingai/video-o1-video-to-video-reference", prompt: ` Add a small fairy as a rider on the raccoon’s back. She must have a black-and-golden face and a cloak in the colors of a dark emerald tropical butterfly with bright blue shimmering spots.`, video_url: "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/racoon-in-the-forest.mp4", }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 15 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 15 * 1000; // 15 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["waiting", "queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }) } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: 6a3ada23-8eb3-4e29-948a-59e5bc2074c5:klingai/video-o1-video-to-video-reference Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: {'id': '6a3ada23-8eb3-4e29-948a-59e5bc2074c5:klingai/video-o1-video-to-video-reference', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/flamingo/files/b/0a878546/2Wt0iDavlysQHFuP3_8I5_output.mp4'}} ``` {% endcode %}
**Processing time**: \~ 3 min 23 sec. **Generated video** (1920x1080, without sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/magic/video-to-video.md # magic/video-to-video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `magic/video-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} The model allows you to embed your custom video into the selected video template — sound included.
Supported Templates

* Art Gallery
* Cappadocia Balloons
* Desktop Reveal
* Dubai Museum
* Egypt Pyramid
* Las Vegas LED
* New York Times Square (66)
* New York Times Square (77)
* Paris Eiffel Tower
* Phone App
* Phone Social
* Rotating Cards
* San Francisco Skyscrapers
* Stockholm Metro
* Thailand Street
* Times Square Billboard
* Times Square Round Screen
* Tokyo Billboard
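Each of the template names above corresponds to a value of the `template` request parameter; the value you send must match one of the enum strings listed in the POST schema below. As an illustration only (the video URL is a placeholder, not a real asset), a minimal request payload might look like the following sketch:

{% code overflow="wrap" %}
```python
# Minimal sketch of a magic/video-to-video request payload.
# "video_url" is a placeholder; replace it with a link to your own video.
payload = {
    "model": "magic/video-to-video",
    "video_url": "https://example.com/my-ad.mp4",  # placeholder URL
    "template": "Cappadocia Balloons",  # must match an enum value from the schema below
}
```
{% endcode %}

See the full code example further down the page for the complete two-step request and polling flow.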

## API Schemas Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find two corresponding API schemas and an example with both endpoint calls. ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["magic/video-to-video"]},"video_url":{"type":"string","format":"uri","description":"A video (supplied via URL or Base64) that will be inserted into the selected video template as the embedded ad content."},"template":{"type":"string","enum":["Thailand Street","Times Square Billboard","New York Times Square (78)","Phone Social","Art Gallery","New York Times Square (67)","Dubai Museum","Rotating Cards","Desktop Reveal","Egypt Pyramid","Cappadocia Balloons","Times Square Round Screen","Stockholm Metro","Tokyo Billboard","San Francisco Skyscrapers","Malaysia Shop","Las Vegas LED","Phone App","Paris Eiffel Tower"],"default":"Thailand Street","description":"Video design template."}},"required":["model","video_url"],"title":"magic/video-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Code Example The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "magic/video-to-video", "video_url": "https://cdn.aimlapi.com/panda/pixverse%2Fmp4%2Fmedia%2Fweb%2Fori%2FSPATnCC6Dp3nA9Sie3fsU_seed186094117.mp4", "template": "Thailand Street" } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["waiting", "queued", "generating"]: print(f"Status: {status}. Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. 
Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "magic/video-to-video", video_url: "https://cdn.aimlapi.com/panda/pixverse%2Fmp4%2Fmedia%2Fweb%2Fori%2FSPATnCC6Dp3nA9Sie3fsU_seed186094117.mp4", template: "Thailand Street" }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 15 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 15 * 1000; // 15 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["waiting", "queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }) } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: expUFhMahEWMCh8vWd-4p Status: queued. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: {'id': 'expUFhMahEWMCh8vWd-4p', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/mule/ompr/openmagic/render_tasks/255044/e59d3497d6514bb8aaca78f8ac0870a6.mp4?response-content-disposition=attachment%3B%20filename%3De59d3497d6514bb8aaca78f8ac0870a6.mp4&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=FUQDW4Z92RG9JPURIVP1%2F20251231%2Ffsn1%2Fs3%2Faws4_request&X-Amz-Date=20251231T172055Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=f9c2d781a10d3e5b23e32eb32476f4c7834862fef6275b6c9d2bcb0ce32111a0'}} ``` {% endcode %}
**Processing time**: \~ 2 min 37 sec. **Generated video** (608x1080, with sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/kling-ai/video-v2-6-pro-text-to-video.md # v2.6-pro/text-to-video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `klingai/video-v2-6-pro-text-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} A high-end AI system for producing cinematic, high-fidelity videos from text prompts, featuring native audio generation and smooth, natural motion. The flagship Kling model as of early December 2025. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["klingai/video-v2-6-pro-text-to-video"]},"prompt":{"type":"string","maxLength":2500,"description":"The text description of the scene, subject, or action to generate in the video."},"aspect_ratio":{"type":"string","enum":["16:9","9:16","1:1"],"description":"The aspect ratio of the generated video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10],"default":"5"},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"cfg_scale":{"type":"number","minimum":0,"maximum":1,"description":"The CFG (Classifier Free Guidance) scale is a measure of how close you want the model to stick to your prompt."},"generate_audio":{"type":"boolean","default":true,"description":"Whether to generate audio for the video."}},"required":["model","prompt"],"title":"klingai/video-v2-6-pro-text-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Code Example The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "klingai/video-v2-6-pro-text-to-video", "prompt": "A cheerful white raccoon running through a sequoia forest", "aspect_ratio": "16:9", "duration": "5" } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. 
Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: bfb9eca8-178f-41ca-a1f1-6e4551157a0b:klingai/video-v2-6-pro-text-to-video Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: completed Processing complete:\n {'id': 'bfb9eca8-178f-41ca-a1f1-6e4551157a0b:klingai/video-v2-6-pro-text-to-video', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/flamingo/files/b/0a84eb06/XtWjnJNjQLBMwJbBz4WcF_output.mp4'}} ``` {% endcode %}
**Processing time**: \~ 2 min 32 sec. **Generated video** (1920x1080, with sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/kling-ai/video-v2.6-pro-image-to-video.md # v2.6-pro/image-to-video {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `klingai/video-v2-6-pro-image-to-video` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} A high-end AI system for producing cinematic, high-fidelity videos from text prompts and reference images, featuring native audio generation and smooth, natural motion. The flagship Kling model as of early December 2025. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schemas Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find two corresponding API schemas and an example with both endpoint calls. ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["klingai/video-v2-6-pro-image-to-video"]},"prompt":{"type":"string","maxLength":2500,"description":"The text description of the scene, subject, or action to generate in the video."},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame for the video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10],"default":"5"},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"generate_audio":{"type":"boolean","default":true,"description":"Whether to generate audio for the video."}},"required":["model","prompt","image_url"],"title":"klingai/video-v2-6-pro-image-to-video"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the 
generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. ## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Code Example The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL. 
{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import time

# Insert your AIML API Key instead of :
api_key = ""
base_url = "https://api.aimlapi.com/v2"

# Creating and sending a video generation task to the server
def generate_video():
    url = f"{base_url}/video/generations"
    headers = {
        "Authorization": f"Bearer {api_key}",
    }

    data = {
        "model": "klingai/video-v2-6-pro-image-to-video",
        "prompt": "Mona Lisa puts on glasses with her hands.",
        "image_url": "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg",
        "duration": "5",
    }

    response = requests.post(url, json=data, headers=headers)

    if response.status_code >= 400:
        print(f"Error: {response.status_code} - {response.text}")
    else:
        response_data = response.json()
        return response_data


# Requesting the result of the task from the server using the generation_id
def get_video(gen_id):
    url = f"{base_url}/video/generations"
    params = {
        "generation_id": gen_id,
    }

    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json"
    }

    response = requests.get(url, params=params, headers=headers)
    return response.json()


def main():
    # Running video generation and getting a task id
    gen_response = generate_video()
    gen_id = gen_response.get("id")
    print("Generation ID: ", gen_id)

    # Trying to retrieve the video from the server every 15 sec
    if gen_id:
        start_time = time.time()

        timeout = 1000
        while time.time() - start_time < timeout:
            response_data = get_video(gen_id)

            if response_data is None:
                print("Error: No response from API")
                break

            status = response_data.get("status")
            print("Status:", status)

            if status == "waiting" or status == "active" or status == "queued" or status == "generating":
                print("Still waiting... Checking again in 15 seconds.")
                time.sleep(15)
            else:
                print("Processing complete:\n", response_data)
                return response_data

        print("Timeout reached. Stopping.")
        return None

if __name__ == "__main__":
    main()
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: 27f25c85-e9db-44e9-909a-0c3ceb35cdd9:klingai/video-v2-6-pro-image-to-video Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: completed Processing complete:\n {'id': '27f25c85-e9db-44e9-909a-0c3ceb35cdd9:klingai/video-v2-6-pro-image-to-video', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/flamingo/files/b/0a84eaeb/4JST2nBT5v7zbzIy6RlmW_output.mp4'}} ``` {% endcode %}
**Processing time**: \~ 1 min 50 sec. **Generated video** (1180x1756, with sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/kling-ai/video-v2.6-pro-motion-control.md # v2.6-pro/motion-control {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `klingai/video-v2-6-pro-motion-control` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A next-generation cinematic video generation model developed by KlingAI. It focuses on transferring motion from reference videos to arbitrary target characters, producing smooth, realistic movement, detailed visuals, and native audio when enabled. ## How to Make a Call
Step-by-Step Instructions :digit\_one: **Setup You Can’t Skip** :black\_small\_square: [**Create an Account**](https://aimlapi.com/app/sign-up): Visit the AI/ML API website and create an account (if you don’t have one yet).\ :black\_small\_square: [**Generate an API Key**](https://aimlapi.com/app/keys): After logging in, navigate to your account dashboard and generate your API key. Ensure the key is enabled on the UI. :digit\_two: **Copy the code example** At the bottom of this page, you'll find a code example that shows how to structure the request. Choose the code snippet in your preferred programming language and copy it into your development environment. :digit\_three: **Modify the code example** :black\_small\_square: Replace `` with your actual AI/ML API key.\ :black\_small\_square: Adjust the input field used by this model (for example, prompt, input text, instructions, media source, or other model-specific input) to match your request. :digit\_four: **(Optional)** **Adjust other optional parameters if needed** Only the required parameters shown in the example are needed to run the request, but you can include optional parameters to fine-tune behavior. Below, you can find the corresponding **API schema**, which lists all available parameters and usage notes. :digit\_five: **Run your modified code** Run your modified code inside your development environment. Response time depends on many factors, but for simple requests it rarely exceeds a few seconds. {% hint style="success" %} If you need a more detailed walkthrough for setting up your development environment and making a request step-by-step, feel free to use our [**Quickstart guide.**](https://docs.aimlapi.com/quickstart/setting-up) {% endhint %}
## API Schemas ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["klingai/video-v2-6-pro-motion-control"]},"prompt":{"type":"string","description":"Optional instructions that define the background elements, including their appearance, timing in the frame, and behavior, and can also subtly adjust the character’s animation."},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that serves as the character reference for animation. The image must contain exactly one clearly visible character, who will be animated using the motion from the reference video provided in the video_url parameter. For optimal results, be sure the character’s proportions in the image match those in the video."},"video_url":{"type":"string","format":"uri","description":"A HTTPS URL pointing to a video or a data URI containing a video. The character’s movements from this video will be applied to the character from the image provided in the image_url parameter. For best results, use a video with a single clearly visible character. If the video contains two or more characters, the motion of the character occupying the largest portion of the frame will be used for generation."},"character_orientation":{"type":"string","enum":["image","video"],"default":"image","description":"Generate the orientation of the character in the video, which can be selected to match the image or the video:\n- image: has the same orientation as the person in the picture; At this time, the reference video duration should not exceed 10 seconds;\n- video: consistent with the orientation of the characters in the video; At this time, the reference video duration should not exceed 30 seconds;"},"keep_audio":{"type":"boolean","default":true,"description":"Whether to keep the original audio from the video."}},"required":["model","image_url","video_url"],"title":"klingai/video-v2-6-pro-motion-control"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. 
This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. ## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Code Example 1. Provide the URL of the image containing the character you want to animate. 2. Provide the URL of the video where another character performs the movements you want to transfer to the animated character. 3. If needed, describe minor background details or additional objects in the frame using the `prompt` parameter. Example: `"A brightly colored parrot flies in from the left, briefly circles above the character once, and then hurries off to the right."` 4. Set the `character_orientation` parameter to `image` or `video`, depending on whether you want to use the character’s orientation from the image reference or from the video reference. 5. By default, the model uses the audio track from the reference video. You can disable this behavior by setting the `keep_audio` parameter to `false`.
Input Reference Guidelines * Ensure the character’s entire body and head are clearly visible and not obstructed in both the image and the motion reference. * Keep the character’s proportions consistent between the image and the reference video. Avoid pairing a full-body motion reference with a half-body or cropped image. * Upload a motion reference featuring a single character whenever possible.\ If the motion reference contains two or more characters, the motion of the character occupying the largest portion of the frame will be used for generation. * Real human actions are recommended. Certain stylised characters, humanoid animals, and characters with partial humanoid body proportions are supported and can be recognised. * Avoid cuts, camera movements, and rapid scene changes in the motion reference video. * Avoid overly fast or complex motions. Steady, moderate movements generally produce better results. * The short edge of the input media must be at least 300–340 px (depending on the input type), and the long edge must not exceed 3850–65536 px. * The supported duration of the motion reference video ranges from 3 to 30 seconds.\ The generated video length will generally match the duration of the uploaded reference video. * If the motion is too complex or fast-paced, the generated output may be shorter than the original video, as the model can only extract valid continuous action segments.
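The full polling example below sends only the required fields. For illustration, here is a minimal, hedged sketch of a request body that also sets the optional parameters described in the steps above (`prompt`, `character_orientation`, `keep_audio`); the reference URLs are placeholders, not working links:

```python
# A sketch of a request body using the optional parameters from the schema above.
# The image/video URLs are placeholders — substitute your own character image and motion reference.
payload = {
    "model": "klingai/video-v2-6-pro-motion-control",
    "image_url": "<DIRECT_LINK_TO_CHARACTER_IMAGE>",
    "video_url": "<DIRECT_LINK_TO_MOTION_REFERENCE_VIDEO>",
    # Optional: describes background elements only; the character's motion always comes from video_url
    "prompt": "A brightly colored parrot flies in from the left, briefly circles above the character once, and then hurries off to the right.",
    # Optional: take the character's orientation from the image reference (default) or the video reference
    "character_orientation": "image",
    # Optional: set to False to drop the original audio track of the reference video
    "keep_audio": False,
}
```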
The code below creates a video generation task, then automatically polls the server every **15** seconds until it receives the video URL.

{% tabs %}
{% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import time

# Insert your AIML API Key instead of :
api_key = ""
base_url = "https://api.aimlapi.com/v2"


# Creating and sending a video generation task to the server
def generate_video():
    url = f"{base_url}/video/generations"
    headers = {
        "Authorization": f"Bearer {api_key}",
    }
    data = {
        "model": "klingai/video-v2-6-pro-motion-control",
        "image_url": "https://cdn.aimlapi.com/flamingo/files/b/0a875302/8NaxQrQxDNHppHtqcchMm.png",
        "video_url": "https://cdn.aimlapi.com/flamingo/files/b/0a8752bc/2xrNS217ngQ3wzXqA7LXr_output.mp4",
    }

    response = requests.post(url, json=data, headers=headers)
    if response.status_code >= 400:
        print(f"Error: {response.status_code} - {response.text}")
        return None
    return response.json()


# Requesting the result of the task from the server using the generation_id
def get_video(gen_id):
    url = f"{base_url}/video/generations"
    params = {
        "generation_id": gen_id,
    }
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }

    response = requests.get(url, params=params, headers=headers)
    return response.json()


def main():
    # Running video generation and getting a task id
    gen_response = generate_video()
    if not gen_response:
        return None
    gen_id = gen_response.get("id")
    print("Generation ID: ", gen_id)

    # Try to retrieve the video from the server every 15 sec
    if gen_id:
        start_time = time.time()
        timeout = 1000
        while time.time() - start_time < timeout:
            response_data = get_video(gen_id)
            if response_data is None:
                print("Error: No response from API")
                break

            status = response_data.get("status")
            if status in ["queued", "generating"]:
                print(f"Status: {status}. Checking again in 15 seconds.")
                time.sleep(15)
            else:
                print("Processing complete:\n", response_data)
                return response_data

        print("Timeout reached. Stopping.")
    return None


if __name__ == "__main__":
    main()
```
{% endcode %}
{% endtab %}

{% tab title="JavaScript" %}
{% code overflow="wrap" %}
```javascript
const https = require("https");
const { URL } = require("url");

// Replace with your actual AI/ML API key
const apiKey = "";
const baseUrl = "https://api.aimlapi.com/v2";

// Creating and sending a video generation task to the server
function generateVideo(callback) {
  const data = JSON.stringify({
    model: "klingai/video-v2-6-pro-motion-control",
    image_url: "https://cdn.aimlapi.com/flamingo/files/b/0a875302/8NaxQrQxDNHppHtqcchMm.png",
    video_url: "https://cdn.aimlapi.com/flamingo/files/b/0a8752bc/2xrNS217ngQ3wzXqA7LXr_output.mp4"
  });

  const url = new URL(`${baseUrl}/video/generations`);

  const options = {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json",
      "Content-Length": Buffer.byteLength(data),
    },
  };

  const req = https.request(url, options, (res) => {
    let body = "";
    res.on("data", (chunk) => body += chunk);
    res.on("end", () => {
      if (res.statusCode >= 400) {
        console.error(`Error: ${res.statusCode} - ${body}`);
        callback(null);
      } else {
        const parsed = JSON.parse(body);
        callback(parsed);
      }
    });
  });

  req.on("error", (err) => console.error("Request error:", err));
  req.write(data);
  req.end();
}

// Requesting the result of the task from the server using the generation_id
function getVideo(genId, callback) {
  const url = new URL(`${baseUrl}/video/generations`);
  url.searchParams.append("generation_id", genId);

  const options = {
    method: "GET",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
  };

  const req = https.request(url, options, (res) => {
    let body = "";
    res.on("data", (chunk) => body += chunk);
    res.on("end", () => {
      const parsed = JSON.parse(body);
      callback(parsed);
    });
  });

  req.on("error", (err) => console.error("Request error:", err));
  req.end();
}

// Initiates video generation and checks the status every 15 seconds until completion or timeout
function main() {
  generateVideo((genResponse) => {
    if (!genResponse || !genResponse.id) {
      console.error("No generation ID received.");
      return;
    }

    const genId = genResponse.id;
    console.log("Generation ID:", genId);

    const timeout = 1000 * 1000; // 1000 sec
    const interval = 15 * 1000;  // 15 sec
    const startTime = Date.now();

    const checkStatus = () => {
      if (Date.now() - startTime >= timeout) {
        console.log("Timeout reached. Stopping.");
        return;
      }

      getVideo(genId, (responseData) => {
        if (!responseData) {
          console.error("Error: No response from API");
          return;
        }

        const status = responseData.status;
        if (["waiting", "queued", "generating"].includes(status)) {
          console.log(`Status: ${status}. Checking again in 15 seconds.`);
          setTimeout(checkStatus, interval);
        } else {
          console.log("Processing complete:\n", responseData);
        }
      });
    };

    checkStatus();
  });
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': 'NVOeetazzu2Xlca6ghTMx', 'status': 'queued'} Generation ID: NVOeetazzu2Xlca6ghTMx Status: queued. Checking again in 15 seconds. Status: queued. Checking again in 15 seconds. Status: queued. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: {'id': 'NVOeetazzu2Xlca6ghTMx', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/kangaroo/bs2/upload-ylab-stunt-sgp/muse/784256485483880450/VIDEO/20260109/fcf822536e099526fc3cb8e0c72b6d56-34f8180e-d688-4660-85e9-bac5a1bacae2.mp4?cacheKey=ChtzZWN1cml0eS5rbGluZy5tZXRhX2VuY3J5cHQSsAHC22ZHtdl8eKw97-KZAzBvy2RP67q-Vk1qJ97HX6jnXG4aaDLv9nXnEJTxrTer-3ItWMsuf1gHAmAp5-IgNiZH-4Gbzz4Nr6kVKj_lDlZ8iO5qFZc1B-ANepHaG4gMRUmNtr0juZVHxV_1BMzn7wh81dK_TZg7I-UtGfATmByvx2ttbyHG8zBEggZMVqeXVqwO-_Doy2htfyzZFn304rHeGHC4L8DmVoRgJm-5h_LINBoSK9DDR3Zts0AO39BBzVqoebvyIiDSG0tFxUnEmcpQh23QNXer3hT6YcDJEoZRcn8_q3CCPigFMAE&x-kcdn-pid=112781&ksSecret=1ef8bfd0ebad4b4810a97cd91fa51924&ksTime=69884f0c'}} ``` {% endcode %}
**Processing time**: \~ 5 min 56 sec.
Image and Video References
**Generated video** (1936x1072, without sound, the character’s orientation matches the orientation from the image reference): {% embed url="" %} `"character_orientation":"image"` {% endembed %} **Generated video** (1936x1072, without sound, the character’s orientation matches the orientation from the video reference): {% embed url="" %} `"character_orientation":"video"` {% endembed %} --- # Source: https://docs.aimlapi.com/api-references/vision-models.md # Vision Models ## Overview Our API enables you to use machine learning models for tasks that require visual capabilities. These models are referred to as *vision models*. Within our API, we offer two categories of vision models: **OCR** and **OFR**. ### OCR: Optical Character Recognition With OCR technology, you can analyze any document and extract text as well as other characters and symbols. This allows you to detect: * Text * Paragraph blocks * Handwriting * Text inside PDF/TIFF files {% content-ref url="vision-models/ocr-optical-character-recognition" %} [ocr-optical-character-recognition](https://docs.aimlapi.com/api-references/vision-models/ocr-optical-character-recognition) {% endcontent-ref %} ### OFR: Optical Feature Recognition In contrast to OCR, OFR allows you to analyze not just documents but also images. You can filter exactly what you want to find in the image by the features they include: * Crop hints * Faces * Image properties * Labels * Landmarks * Logos * Multiple objects * Explicit content * Web entities and pages * And many more {% content-ref url="vision-models/ofr-optical-feature-recognition" %} [ofr-optical-feature-recognition](https://docs.aimlapi.com/api-references/vision-models/ofr-optical-feature-recognition) {% endcontent-ref %} --- # Source: https://docs.aimlapi.com/api-references/speech-models/voice-chat.md # Voice Chat ## Overview Voice chat models are designed to enable voice-based interactions with AI systems. Unlike traditional text-only assistants, these models can generate natural-sounding speech as responses, creating a more immersive and human-like conversational experience. Some models accept text input and respond with voice, while others can process both speech and text, allowing users to talk directly to the model or type messages depending on the use case. Depending on the model, you may have access to settings for bitrate, output audio formats (often including lossless options), stream vs. non-stream modes, as well as a variety of voices and ways to customize or modify them. ## All Available Voice Chat Models
| Model ID | Developer | Context | Model Card |
| --- | --- | --- | --- |
| `elevenlabs/v3_alpha` | ElevenLabs | | Eleven v3 Alpha |
| `minimax/speech-2.5-turbo-preview` | MiniMax | | MiniMax Speech 2.5 Turbo |
| `minimax/speech-2.5-hd-preview` | MiniMax | | MiniMax Speech 2.5 HD |
| `minimax/speech-2.6-turbo` | MiniMax | | MiniMax Speech 2.6 Turbo |
| `minimax/speech-2.6-hd` | MiniMax | | MiniMax Speech 2.6 HD |
*** Several models that were originally listed in our Text Models (LLM) section should also be included in this category: * [gpt-4o-audio-preview](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o-audio-preview) * [gpt-4o-mini-audio-preview](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o-mini-audio-preview) --- # Source: https://docs.aimlapi.com/api-references/embedding-models/anthropic/voyage-2.md # voyage-2 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `voyage-2` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A general-purpose embedding model that delivers state-of-the-art performance across multiple domains while maintaining high efficiency. It's optimized for a balance between cost, latency, and retrieval quality. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema {% openapi src="" path="/v1/embeddings" method="post" %} [voyage-2.json](https://3927338786-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FROMd1X5PuqtikJ48n2N9%2Fuploads%2Fgit-blob-23bbcb539d99f68452ffbbe3fadc8e45e4a96dd8%2Fvoyage-2.json?alt=media\&token=342160ba-838f-4fcc-8c14-8ac53a059bf1) {% endopenapi %} ## Code Example {% tabs %} {% tab title="Python" %}
```python
import openai

# Initialize the API client
client = openai.OpenAI(
    # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
    api_key="<YOUR_AIMLAPI_KEY>",
    base_url="https://api.aimlapi.com/v1",
)

# Define the text for which to generate an embedding
text = "Laura is a DJ."

# Request the embedding
response = client.embeddings.create(
    input=text,
    model="voyage-2"
)

# Print the embedding
print(response)
```
{% endtab %} {% tab title="JS" %} ```javascript import OpenAI from "openai"; import util from "util"; // Initialize the API client const client = new OpenAI({ // Insert your AIML API Key instead of apiKey: "", baseURL: "https://api.aimlapi.com/v1", }); // Define the text for which to generate an embedding const text = "Laura is a DJ."; const response = await client.embeddings.create({ input: text, model: "voyage-2", }); // Convert embedding to a regular array (not TypedArray) const pythonLikeResponse = { ...response, data: response.data.map(item => ({ ...item, embedding: Array.from(item.embedding), })), }; // Python-like print console.log( util.inspect(pythonLikeResponse, { depth: null, maxArrayLength: null, compact: true, }) ); ``` {% endtab %} {% endtabs %} This example shows how to set up an API client, send text to the embedding API, and print the response with the embedding vector. See how large a vector response the model generates from just a single short input phrase.
Response {% code overflow="wrap" %} ```json CreateEmbeddingResponse(data=[Embedding(embedding=[0.001198187, 0.049298365, 0.028401304, 0.033417277, -0.055729233, -0.084182143, 0.009030926, 0.020212591, -0.044087704, 0.020462111, 0.003028488, 0.05307113, -0.023698248, -0.006883118, -0.05646202, 0.000945727, 0.015159524, -0.002781284, -0.014497086, -0.043287832, 0.056263112, -0.014664626, -0.012783666, 0.021798261, 0.020149186, -0.00040017, 0.049402509, 0.051637456, -0.012695557, 0.041487299, 0.003978217, 0.011255085, 0.006779575, 0.020875007, 0.057707101, 0.014465009, -0.0040899, -0.05505462, 0.031611055, -0.012716668, -0.051720962, -0.014375405, -0.011334603, -0.015697889, -0.008672391, -0.06338267, 0.010669489, 0.022968747, -0.016906437, 0.01490831, 0.019405564, -0.05475156, 0.040595006, 0.04169935, -0.044304911, 0.018199893, -0.003556291, -0.017475201, -0.012739422, -0.016247479, -0.039491609, -0.035999682, 0.050156876, -0.048853625, -0.000868453, -0.023360942, -0.030231863, -0.066703193, 0.03264508, 0.012832661, -0.010381677, -0.03484907, 0.029493447, 0.006854032, -0.022368258, -0.013598492, -0.008583372, -0.024105456, 0.052614368, 0.0505866, 0.038397592, -0.041998185, -0.028611477, -0.002206625, 0.023181263, -0.003260826, 0.001625487, 0.008019842, -0.030142259, 0.013663144, -0.020966606, 0.018265339, -0.028291998, -0.02990957, 0.011609118, -0.001845041, -0.06387338, 0.008490836, -0.013855136, -0.015574744, -0.047282029, 0.037959184, -0.03569556, 0.062524155, -0.027133711, 0.01506374, -0.037750188, 0.005579574, -0.012883563, 0.008325232, -0.011427139, 0.026454872, 0.025160067, -0.0158771, 0.016693274, 0.04076026, 0.034705047, 0.056452639, -0.03413317, 0.015282825, -0.012572439, -0.000490361, 0.0163908, -0.012331745, -0.052867524, 0.029283743, 0.034171171, 0.007781742, -0.029384607, 0.057016537, -0.007166677, -0.024800241, 0.001211888, -0.047918174, 0.002634328, -0.037601471, 0.004386539, 0.056798857, 0.018685797, 0.006914608, 0.031465914, 0.012414045, -0.009234188, 0.03577866, -0.044168863, 0.029474214, 0.008096647, 0.055801481, -0.045081794, -0.015465318, -0.030047845, -0.014134975, -0.029257005, -0.028774267, 0.010630493, 0.015519386, -0.073758952, -0.027448498, -0.041372832, -0.009664314, 0.015075117, 0.030134693, -0.001441059, 0.035603058, 0.020678909, 0.008900155, 0.038376361, 0.06279625, 0.074670948, 0.003063203, 0.009223592, -0.025492212, -0.032606144, -0.042533465, 0.000810867, -0.017547095, 0.01060733, 0.011103672, -0.0412274, -0.017947676, 0.042320359, -0.024170021, 0.013410355, 0.033199597, -0.068362042, -0.013752396, -0.024007406, 0.037092932, -0.042873118, 0.01801658, 0.111038134, -0.037622288, 0.045559373, 0.013852498, -0.057670511, 0.020490319, 0.012189714, 0.004455179, 0.074316278, -0.014086301, -0.014457034, -0.009707415, 0.007642205, -0.027854297, 0.03253765, -0.010758918, 0.008705751, 0.035408273, -0.032618344, -0.080561377, -0.051747233, -0.028979629, -0.025693705, 0.04295193, -0.043374151, -0.03633628, 0.016753441, -0.053659424, 0.055474028, 0.037140079, 0.009594706, 0.010742909, 0.007659789, 0.024168789, -0.036221754, -0.014885176, 0.009512373, -0.012258912, 0.034117222, 0.034771662, 0.021216067, -0.024904449, -0.028171899, -0.00211362, 0.036748119, 0.017063066, -0.026057636, -0.020205554, -0.006556953, -0.008266491, -0.042868428, -0.03359367, 0.023219263, 0.030353839, -0.001692925, 0.021424126, 0.052400269, 0.062402181, 0.023540854, -0.028841354, 0.009650298, 0.032100182, -0.061093774, -0.007988648, -0.046846673, -0.037809297, -0.041549794, -0.033368487, 
0.026495688, 0.064340636, 0.006349597, -0.012879575, 0.005167323, 0.015456961, 0.018827124, 0.011481295, 0.049941074, 0.003177525, -0.002744545, -0.053687572, -0.031027278, 0.022542305, -0.046837293, 0.019902494, 0.009571601, -0.034680944, 0.030735712, -0.014235837, 0.026802033, 0.042081222, 0.017979285, 0.033056982, 0.034685813, -0.028054148, 0.02453999, 0.044428762, 0.016298991, 0.023881679, 0.037066542, -0.001603614, -0.025365546, -0.01015834, 0.017794563, -0.029792756, -0.031316496, -0.000106845, -0.027813951, -0.014794107, 0.035539635, 0.002560381, -0.039581683, 0.030089246, 0.064851053, 0.021562288, 0.03165615, 0.000671329, -0.056645922, -0.026960129, 0.0125798, 0.015448488, -0.045676656, -0.014852514, -0.000891089, -0.009705304, 0.018401738, 0.004213195, -0.00807322, -0.030693488, 0.025196658, -0.050657436, 0.049876332, 0.02019453, -0.003393473, -0.035453312, -0.030501613, -0.020382592, 0.001321782, -0.014364615, 0.030947877, 0.001873482, -0.020154418, 0.015497336, 0.01416195, 0.021376979, -0.046149544, 0.041279003, 0.011299828, -0.028830094, -0.04904503, 0.056102667, -0.042783044, 0.035157289, -0.001227765, 0.009589897, -0.029410413, 0.008694381, -0.028968956, -0.036708008, -0.017044771, -0.025001032, -0.05728535, 0.009724069, 0.056744438, 0.023171647, 0.009714453, 0.021230141, -0.003341692, 0.026751366, -0.037849642, 0.051792271, 0.032075085, -0.047360372, 0.044511329, 0.015381673, -0.032888092, -0.022994079, 0.031133534, 0.005668767, 0.028056258, -0.021261925, 0.020336676, -0.004678897, -0.005449741, -0.004163554, -0.016482633, -0.019186245, -0.017180821, -0.00584695, -0.01140462, 0.055605382, -0.030624058, 0.036248025, -0.024241505, 0.002355047, 0.017398732, -0.036150444, -0.006220702, -0.046372849, 0.071542069, 0.008601004, -0.012999673, -0.005422472, 0.020756902, 0.024100296, -0.025406362, -0.028843816, 0.012709749, 0.027977442, 0.046310924, -0.043083761, -0.049864605, 0.006494969, 0.063335761, -0.04888881, -0.015992271, 0.028869968, 0.041111995, 0.022951858, -0.005264258, 0.019293355, -0.026273319, 0.044486932, -0.045276251, -0.028733453, 0.02591021, 0.000789903, -0.033494212, -0.068667918, -0.006247091, -0.016508318, 0.067724027, 0.010342973, 0.033561062, -0.006759091, 0.035951063, 0.0029429, 0.004145493, 0.020242205, -0.013639805, 0.020355793, -0.006030703, 0.022007728, -0.060298592, 0.018501194, -0.04678164, -0.059775036, 0.052164763, 0.049945764, -0.070929147, 0.033176377, -0.01071652, -0.046087615, 0.000207357, -0.026349554, 0.008776128, 0.017910557, 0.008822279, -0.023125907, 0.020568194, -0.015138098, 0.054378603, -0.020249769, -0.001507119, -0.023981135, 0.016336849, 0.046181444, 0.021443479, -0.056840144, 0.038629342, -0.050983489, -0.021071339, -0.00819669, 0.018861312, 0.001533647, -0.028306542, 0.035877876, 0.029374287, -0.012802461, 0.006991253, -0.050845794, 0.020390628, 0.062131021, 0.046376601, -0.009326362, -0.018890018, -0.045002982, -0.010183878, 0.014379334, 0.006753109, 0.00150662, -0.01997697, -0.000915337, -0.051079661, 0.004644298, 0.033421028, -0.033661224, -0.020539341, 0.030257197, 0.023084564, 0.031480696, -0.023011556, -0.019763367, 0.00390647, -0.020147938, -0.044338778, -0.048264861, -0.020901278, 0.009093144, 0.000939571, 0.023906544, -0.037469644, -0.032407936, 0.004809305, -0.001218925, 0.008487083, -0.002607206, -0.005626663, 0.024581626, 0.048285972, 0.004967473, -0.011110298, -0.047326129, 0.008415775, 0.07554283, -0.036838192, -0.003563299, 0.005138149, 0.00033197, -1.642e-05, -0.017634822, -0.014135562, -0.005822577, -0.042806499, 
0.011845444, -0.049168881, -0.049296487, 0.000836464, 0.032443121, 0.026292086, -0.002519361, 0.026468949, -0.034626935, 0.005885272, 0.041636012, -0.002305085, 0.039814372, -0.012699896, -0.033670604, -0.01011691, 0.03670308, -0.035867088, -0.032834142, 0.044672709, -0.030727267, 0.02278121, 0.043229658, -0.043266486, -0.007680776, 0.024949661, -0.007461349, 0.1018769, 0.02559589, 0.024503808, -0.007787137, 0.040750761, 0.024837773, 0.026692722, 0.021341208, -0.057050318, -0.067151688, -0.003955728, 0.037419621, 0.007050972, -0.019158214, -0.038086083, 0.014505999, -0.028884513, -0.01822646, -0.010033023, 0.002201113, 0.010153737, 0.01152642, 0.016746109, -0.030719763, -0.001128294, -0.025997939, 0.042394605, 0.036704957, 0.041804433, 0.010647382, -0.022103431, -0.011162548, 0.062351517, -0.017835846, -0.033536434, -0.019252276, -0.02370036, 0.012110666, 0.052288614, 0.046012554, -0.053005449, 0.018220887, -0.009672582, 0.015586003, -0.003894624, -0.026198374, 0.018198017, -0.034109715, 0.003874041, -0.020770391, 0.016139345, 0.077196755, -0.010634129, 0.024236344, -0.015169119, 0.007839225, -0.032728001, -0.021668486, 0.02657339, -0.011413066, -0.028288949, -0.069546141, 0.007378625, 0.061276264, 0.001069004, -0.001435547, 0.013964181, 0.004753841, -0.020541921, 0.019881148, -0.009817544, 0.065479696, 0.00187055, 0.025630606, -0.024382245, 0.047288597, -0.001380908, -0.026860792, -0.01333956, -0.066286601, 0.00460583, 0.018205876, 0.002730852, -0.04935278, 0.016408391, -0.036551084, -0.023739342, 0.003795344, 0.011441624, 0.047080301, -0.022063555, 0.021539534, 0.016991876, 0.049174514, -0.019283533, 0.00505608, 0.034415122, 0.052903183, -0.036925919, 0.006959001, -0.001904914, 0.013772085, 0.020490082, 0.001032737, 0.045907937, 0.008717896, 0.035887729, 0.014888461, -0.014279115, -0.030578669, -0.021089165, -0.038909648, -0.007627324, 0.037595373, -0.042227592, 0.026331492, 0.008276765, -0.017613595, 0.033929568, -0.013962802, -0.07381431, -0.012064852, 0.018816451, -0.034075938, 0.031557865, -0.00180839, 0.031200154, -0.010580256, -0.002180412, -0.025208388, 0.025697224, -0.069080763, -0.035107091, -0.002558651, -0.008835532, 0.019562021, 0.029720509, -0.025191968, -0.016871661, -0.050041467, 0.002145755, 0.014632021, 0.029755928, 0.058864921, 0.046310924, -0.031594224, -0.044469107, -0.004312387, -0.084256269, 0.008796799, 0.010890921, 0.032170318, 0.07207007, -0.007934738, 0.014277004, -0.007730782, -0.026402801, -0.054509487, 0.006831118, -0.021223808, 0.008280489, -0.037382387, -0.028521404, -0.005341547, -0.018981585, -0.030643761, 0.019130725, 0.01587452, -0.023169771, 0.001345356, -0.006646352, 0.01316214, 0.035917755, -0.003329847, 0.01666313, 0.028536886, 0.032537185, -0.013834091, -0.021125523, 0.023532176, 0.013589021, -0.002929852, 0.023386275, 0.026110062, -0.020951474, -0.02376557, -0.019592045, 0.022843722, 0.011371606, 0.024481613, -0.02173868, 0.03051991, -0.031910889, 0.02027097, -0.014543589, 0.010243048, -0.020591535, -0.019456349, -0.052799586, 0.023783132, -0.021841655, 0.030468306, -0.008422167, -0.044297405, 0.003326152, -0.007173145, -0.020910075, 0.007811004, -0.00558192, -0.05133862, 0.047784001, -0.013194891, -0.018419446, 0.007394414, 0.00563455, -0.009235966, 0.046325937, 0.072061628, -0.010620466, -0.00684421, 0.037040859, 0.003614904, 0.006193903, -0.036946565, -0.042912524, 0.011873901, 0.013172841, -0.018089531, -0.061097525, 0.01953575, 0.002754902, -0.008945778, 0.01756246, 0.037157673, 0.010929624, -0.024050567, 0.008153808, 0.000359414, 
-0.032816783, 0.006732204, 0.015253856, -0.02360067, -0.01540996, -0.001822347, -0.002970901, 0.031182766, 0.039972, -0.02613985, -0.016019978, -0.008500776, 0.005331813, -0.006966447, 0.063030824, -0.015405211, 0.067567334, 0.029796975, 0.012576163, 0.007712076, -0.014417158, -0.021775039, -0.079656892, -0.005570426, -0.04779432, -0.010504267, 0.027108375, -0.047799483, -0.031130252, -0.001412559, 0.004513233, 0.014160542, -0.002898244, -0.022762798, -0.005124104, 0.016394436, 0.047179759, -0.033995718, 0.028233824, 0.014835213, 0.060594611, -0.032285728, 0.013866571, 0.017348887, -0.001160977, 0.030316073, -0.019644566, -0.022284752, 0.003982704, -0.009029635, -0.02066718, 0.054605901, -0.057648927, -0.029727075, -0.006310611, 0.054903563, 0.018591503, 0.0281658, -0.019154051, -0.004671508, -0.005608073, 0.025439203, 0.021644736, 0.012108027, -0.027437473, -0.060576085, 0.001439886, -0.033617124, 0.018243626, -0.045216907, -0.038642127, 0.007292671, -0.031046277, 0.013207615, -0.002650014, 0.079817332, -0.002122357, -0.023477288, 0.038829193, 0.038136754, -0.001622613, -0.008799115, -0.045834284, 0.066615, 0.007536034, 0.010009361, -0.005835293, -0.044623919, 0.027115414, 0.002192199, -0.012442739, -0.020488206, 0.027835766, 0.054031912, 0.005859609, 0.015641594, 0.012093074, -0.000808697, 0.029878605, -0.0101769, 0.043032624, 0.020055901, 0.014631434, 0.013278337, -0.02434741, -0.044821426, 0.05095534, -0.045242239, -0.03200331, -0.071131811, -0.014987388, 0.051412273, 0.039191362, -0.015005245, 0.058662258, 0.097095497, -0.004294912, -0.023883555, 0.059157658, 0.044377156, -0.020791268, 0.008307171, 0.001214351, -0.047954295, 0.029213376, -0.001246076, 0.021353288, -0.023426151, -0.042853884, -0.001466099, -0.009750165, 0.029869221, 0.0104385, -0.013709236, -0.049293905, -0.027357016, 0.000779464, 0.023469752, -0.016103925, 0.023613336, -0.011284523, -0.009527368, 0.0199256, -0.009235673, 0.052150693, 0.024881754, -0.006957476, -0.01268418, 0.020077949, 0.000591518, -0.005343834, -0.038358182, -0.020031506, 0.00576881, 0.065021113, -0.000830483, 0.020983053, -0.019768322, -0.005942858, -0.014661136, 0.047065292, -0.046664651, 0.01606355, 0.003663107, 0.019157277, 0.001670875, 0.009432152, -0.019699829, 0.000812069, -0.020781415, 0.043960098, -0.036490213, -0.001900142, 0.069007576, -0.029250436, 0.022790007, 0.030682934, 0.033113278, -0.022603409, -0.05646484, -0.055204391, 0.053308509, -0.007725505, -0.03162894, 0.043842345, -0.000238001, -0.024620095, -0.054692451, -0.010728138, 0.01129768, -0.00904629, -0.000388691, -0.081198901, 0.010880013, 0.012378013, 0.015225504, 0.028770156, 0.000204425, 0.001689113, -0.01323201, 0.04179411, 0.014519195, -0.03950005, 0.00101708, 0.013799838, 0.028874896, 0.054830141, 0.021270955, -0.025956186, 0.02234723, -0.024981562, 0.017288486, -0.02723592, 0.014630379, 0.009001604, -0.000573273, 0.028385591, -0.001617805, 0.051975001, -0.029455917, -0.029072165, -0.044029061, 0.017682206, -0.054965485, 0.031072548, -0.018017517, 0.002065328, 0.005563154, 0.012502041, -0.021870976, 0.000447172, 0.050289638, 0.002705959, -0.027021587, -0.066585906, -0.060553797], index=0, object='embedding')], model='voyage-2', object='list', usage=Usage(prompt_tokens=None, total_tokens=6)) ``` {% endcode %}
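The response follows the OpenAI embeddings format, so the vector itself is available as `response.data[0].embedding`. As a minimal sketch of one common use of these vectors (the helper functions and the second phrase below are illustrative, not part of the official example), you can compare two embeddings with cosine similarity:

```python
import math
import openai

client = openai.OpenAI(
    # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
    api_key="<YOUR_AIMLAPI_KEY>",
    base_url="https://api.aimlapi.com/v1",
)

def embed(text: str) -> list[float]:
    """Request a voyage-2 embedding for a single string and return the vector."""
    response = client.embeddings.create(input=text, model="voyage-2")
    return response.data[0].embedding

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Standard cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

vector_a = embed("Laura is a DJ.")
vector_b = embed("Laura plays music at clubs.")  # illustrative second phrase

# Values close to 1.0 indicate semantically similar phrases; values near 0 indicate unrelated ones
print(cosine_similarity(vector_a, vector_b))
```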
You can find a more advanced example of using embedding vectors in our article [Find Relevant Answers: Semantic Search with Text Embeddings](https://docs.aimlapi.com/use-cases/find-relevant-answers-semantic-search-with-text-embeddings) in the Use Cases section. --- # Source: https://docs.aimlapi.com/api-references/embedding-models/anthropic/voyage-code-2.md # voyage-code-2 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `voyage-code-2` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview This embedding model is designed for semantic retrieval of code and related text from both natural language and code-based queries. In a comprehensive evaluation across 11 code retrieval tasks—sourced from popular datasets like HumanEval and MBPP—it achieved a significant 14.52% improvement in recall over competitors, including OpenAI and Cohere. Additionally, it demonstrated consistent gains, averaging 3.03%, across various general-purpose text datasets. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema {% openapi src="" path="/v1/embeddings" method="post" %} [voyage-code-2.json](https://3927338786-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FROMd1X5PuqtikJ48n2N9%2Fuploads%2Fgit-blob-5808a74e4d0eab3f7f8308ab36d84298a9d5a046%2Fvoyage-code-2.json?alt=media\&token=2836a546-a33b-4ba6-bb6e-ebc17415aaf5) {% endopenapi %} ## Code Example {% tabs %} {% tab title="Python" %}
```python
import openai

# Initialize the API client
client = openai.OpenAI(
    # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
    api_key="<YOUR_AIMLAPI_KEY>",
    base_url="https://api.aimlapi.com/v1",
)

# Define the text for which to generate an embedding
text = "Laura is a DJ."

# Request the embedding
response = client.embeddings.create(
    input=text,
    model="voyage-code-2"
)

# Print the embedding
print(response)
```
{% endtab %} {% tab title="JS" %} ```javascript import OpenAI from "openai"; import util from "util"; // Initialize the API client const client = new OpenAI({ // Insert your AIML API Key instead of apiKey: "", baseURL: "https://api.aimlapi.com/v1", }); // Define the text for which to generate an embedding const text = "Laura is a DJ."; const response = await client.embeddings.create({ input: text, model: "voyage-code-2", }); // Convert embedding to a regular array (not TypedArray) const pythonLikeResponse = { ...response, data: response.data.map(item => ({ ...item, embedding: Array.from(item.embedding), })), }; // Python-like print console.log( util.inspect(pythonLikeResponse, { depth: null, maxArrayLength: null, compact: true, }) ); ``` {% endtab %} {% endtabs %} This Python example shows how to set up an API client, send text to the embedding API, and print the response with the embedding vector. See how large a vector response the model generates from just a single short input phrase.
Response {% code overflow="wrap" %} ```json CreateEmbeddingResponse(data=[Embedding(embedding=[-0.015611, 0.016381765, 0.015262494, 0.028116634, 0.026989678, 0.000494121, -0.025271986, -0.011951271, -0.056328308, 0.021407766, -0.012444285, 0.059624858, -0.035985306, -0.046485253, -0.000635049, -0.036895629, 0.037314452, -0.027377753, -0.03597226, 0.018330064, 0.014466659, 0.064007834, -0.017127983, -0.010960116, 0.019682391, -0.019696601, -0.020545894, -0.036266111, 0.022564862, 0.015046909, 0.005086916, -0.055712417, -0.009372988, -0.05443871, 0.010829321, 0.01392135, 0.012792529, 0.00985686, 0.038214996, -0.017488923, 0.027313927, 0.001062958, 0.0097226, -0.03439014, 0.047425389, -0.037770782, 0.028628167, 0.009835372, 0.027161121, -0.004167856, -0.00672366, 0.000454347, -0.017227914, 0.027887423, 0.007638352, -0.010299153, -0.02586738, 0.021838237, 0.046358533, 0.026485134, 0.02549421, -0.042892404, 0.035525952, -0.037090834, 0.022934681, -0.027275261, -0.031381506, 0.002224463, -0.036881655, 0.018545765, 0.016824408, 0.01838477, 0.018052051, -0.016227443, 0.037551586, -0.040155433, -0.026702231, -0.016971217, -0.030671043, 0.002195622, 0.051781811, 0.030518234, -0.04866416, 0.031219378, -0.016983096, 0.016476454, -0.006486237, -0.042233393, -0.002773252, -0.040842075, -0.005939151, 0.011872188, 0.023673562, -0.026831048, 0.017583612, -0.036388755, -0.032783795, -0.000287563, -0.027190471, -0.0390778, 0.010886581, 0.014972136, -0.015995087, -0.013510212, -0.003812624, 0.003719332, 0.00348453, 0.053938825, -0.000124156, -0.023103451, 0.028389407, 0.01323488, 0.009557214, 0.02266226, 0.001955404, -0.047263268, 0.026736386, 0.036146846, 0.025940521, -0.035005916, -0.028510768, 0.007205901, -0.008451774, 0.035909835, 0.020070933, 0.051763173, 0.001338349, 0.02156928, -0.010956738, -0.016569598, -0.006681964, -0.021342516, -0.006222259, 0.011128997, 0.055099327, 0.009063849, 0.001973457, 0.003928495, 0.036765184, 0.031590916, 0.022166915, 0.000420003, -0.010268055, -0.068100102, 0.00809546, -0.023065213, -0.038423706, -0.049879633, 0.016797153, -0.025161108, -0.007720867, 0.013069551, -0.006471504, -0.028841076, -0.075326793, 0.02133905, 0.003436457, -0.022415226, 0.002941419, 0.040881678, -0.018296639, 0.00921459, 0.03463123, 0.033101056, -0.008069638, -0.029624565, 0.018412407, -0.015460201, -0.013632738, 0.019308757, -0.017245501, 0.02154031, 0.010338636, 0.025030896, -0.02780077, -0.032005314, 0.000267355, -0.026330926, -0.003410077, -0.008583209, 0.032415751, 0.004901031, -0.028110811, 0.004195867, -0.004088773, -0.033787295, -0.016172703, -0.029049715, 0.026614645, -0.035513837, 0.033748161, 0.017315267, 0.064818926, 0.007677719, -0.010386271, -0.001057964, 0.023316467, 0.001422498, -0.039107617, 0.028060731, -0.01471736, -0.023774076, 0.008702532, 0.006201076, 0.015451756, -0.02938685, -0.007175386, -0.027763966, -0.029969895, -0.009932857, -0.039219659, 0.012605711, -0.006088872, 0.019028416, 0.04584793, -0.016598979, 0.034786601, 0.025516806, -0.059132893, -0.037109409, -0.003896716, 0.03235985, 0.017011747, -0.009764849, 0.026055971, -0.028698517, 0.017838636, 0.001355179, -0.00779378, -0.011981669, -0.022837313, 0.027087046, -0.001790017, -0.010994941, 0.020262875, 0.023161562, -0.019865714, 0.032496117, -0.023300394, -0.010266366, 0.015670486, 0.004460398, -0.010391513, -0.015774146, -0.009346084, 0.018950731, -0.02709054, -0.028998075, -0.054021753, 0.013167793, 0.011030814, 0.007862847, 0.005070377, -0.000759557, 0.000793489, -0.054297082, 0.034167916, -0.019880621, 0.02641991, 
0.038295127, -0.017762044, 0.056693558, 0.03424304, -0.029434605, -0.000219647, 0.017203951, 0.016556468, -0.005071542, 0.04677207, -0.015545865, 0.000654765, -0.050357625, 0.015936911, -0.00074884, 0.027610691, 0.03909364, -0.011458663, -0.008858659, 0.015301278, 0.006836169, 0.000164921, -0.026906285, 0.019307591, -0.001797821, -0.013217874, -0.012339754, -0.043093197, 0.022559881, 0.01490074, -0.011686536, 0.02538636, 0.034338895, 0.019348122, -0.005348564, -0.003482434, 0.021459246, -0.044842567, -0.028761964, 0.018909441, -0.027968049, 0.026672415, -0.032805227, 0.011964898, 0.025785385, 0.044160988, 0.02507434, -0.005189846, 0.018182497, -0.011724213, -0.001976419, -0.019829841, -0.00734696, -0.000371858, -0.001410153, 0.027824996, -0.024330683, 0.030691365, -0.032004382, 0.036360804, -0.018323893, 0.018969597, -0.010687898, 0.035673633, 0.006694892, -0.007840834, 0.013054301, -0.000389386, -0.024205361, -0.016925793, -0.02575743, -0.000165386, 0.030299271, 0.013027214, 0.038340781, -0.009825821, 0.008132764, -0.007000391, -0.037161179, -0.010215527, 0.031265035, 0.015670953, -0.005347385, -0.039390869, 0.014860559, -0.035821315, -0.009872874, -0.028576458, -0.002587163, 0.040891401, -0.024546077, 0.021533787, 0.034094777, -0.050905496, 0.015685512, -0.033866495, -0.000772789, 0.037627056, -0.007396445, -0.040399026, -0.000197488, -0.015808154, 0.02177418, 0.011872421, 0.038658511, 0.0426194, 0.015379932, -0.040902641, -0.008455093, -0.024551915, 0.004770935, -0.016702231, 0.055900633, -0.012855888, -0.021779072, 0.005040241, 0.012406258, 0.039369438, 0.007326621, 0.00689513, -0.039187744, -0.008432061, 0.020185595, -0.011021, 0.033220321, 0.00132267, 0.020309927, 0.023763478, -0.00414794, -0.031565759, -0.023394037, 0.025752308, 0.020198582, 0.038441993, -0.007633868, 0.023998277, 0.017810496, 0.005160058, 0.027906992, 0.030801022, 0.029096726, -0.031985048, -0.030738126, -0.00627467, 0.027680574, -0.003697211, -0.022719562, 0.019191004, 0.016859522, -0.020008272, -0.026645396, -0.023417097, -0.000267239, 0.025659597, -0.048059918, 0.010213664, -0.007070855, 0.007548583, -0.036012795, -0.012392806, 0.017474597, -0.002490173, -0.021732775, -0.030866243, -0.022850109, 0.007194837, 0.022641642, 0.046713531, 0.032564715, 0.041126203, 0.04461056, -0.006376931, -0.015022743, -0.064248227, -0.024678169, -0.022263817, 0.021715712, 0.009394215, -0.057244223, -0.015287651, 0.03833985, 0.021717804, -0.016180331, -0.017088385, 0.012107222, -0.00306757, 0.007209512, 0.000661197, -0.03068828, -0.034486111, 0.022149444, -0.01720299, 0.054231863, 0.015475634, -0.043653183, 0.020307802, -0.026378913, 0.011381386, 0.002471269, -0.004868681, -0.002345242, 0.029836655, 0.01477309, 0.030104067, 0.034129251, 0.013569321, -0.033810124, 0.014772042, 0.028001098, 0.012205523, 0.007632704, 0.003523664, -0.019178078, 0.035184927, 0.03753062, 0.042594243, 0.032781933, 0.020368861, 0.008273867, 0.021707559, 0.008643657, 0.01484006, -0.015387419, 0.026205169, 0.017482458, 0.004658003, 0.008840374, -0.010825594, -0.018101435, 0.012904659, -0.02008188, 0.038989633, -0.032659408, 0.003149099, -0.051365085, -0.051896881, -0.0103992, 0.023441555, -0.017339958, -0.0056126, -0.026190698, 0.013750197, 0.033381984, 0.026715508, 0.01451348, 0.030357504, -0.043078754, 0.007957769, 0.042994898, -1.7907e-05, 9.7601e-05, 0.030064965, 0.01354428, -0.020923488, 0.006907099, 0.018497197, -0.017796518, 0.032020688, 0.005771552, 0.035817079, -0.024985239, 0.000536269, 0.031402934, 0.010627712, 0.004155976, 0.005783042, 
0.013424434, -0.02892866, -0.031268761, -0.022247424, -0.025858061, 0.00743785, 0.043632913, -0.008121845, 0.038240153, -0.0066504, -0.022195799, -0.014472716, 0.015444652, 0.056213651, -0.02303531, 0.019451221, -0.001878538, 0.010490861, -0.013677478, -0.047267925, -0.006145244, 0.03423325, -0.023927465, -0.026708286, -0.020278713, 0.017376296, -0.033349369, -0.040836021, 0.000131643, -0.005370519, 0.003878663, 0.035750501, 0.036146499, -0.018382505, -0.0313394, -0.0297118, -0.031755138, -0.039810274, 0.018974256, -0.002660772, 0.019133937, 0.019708479, -0.045819979, 0.056935813, -0.047368087, -0.008553044, -0.008238053, -0.010541642, -0.009270364, -0.009780166, 0.01240818, 0.015221381, 0.011449492, -0.004644944, -0.003332421, -0.003668552, 0.008332102, -0.009887608, 0.004527135, -0.000840443, -0.018239103, -0.028126884, 0.039549269, -0.008352513, 0.023325318, 0.024531709, 0.020384176, -0.016664436, -0.053531185, 0.03832937, 0.03291098, -0.028109182, -0.030703187, 0.007889867, 0.006861967, -0.007612088, 0.002401773, 0.000229219, -0.01053113, 0.018540291, -0.045134205, 0.005439235, 0.031725321, 0.041210584, 0.068142027, -0.04373844, -0.006974651, 0.024038345, 0.02907506, -0.016794357, 0.021829618, 0.003944119, 0.020799797, 0.06064979, -0.003670648, 0.003946331, -0.072844014, -0.013736164, -0.028947761, 0.014498338, 0.017725239, -0.014983084, 0.016463643, 0.010431287, 0.029970827, 0.006815846, 0.007061843, -0.033205878, -0.018893892, 0.01319132, -0.01642218, 0.011988658, 0.003912148, -0.017554611, 0.000968865, 0.041245062, -0.027315559, 0.040810864, -0.022731557, 0.007542265, 0.04177523, 0.009034295, 0.008090428, -0.051856816, 0.028538255, 0.028897448, 0.031723224, 0.027290866, 0.030006234, -0.002491673, 0.001948736, -0.022764636, 0.008652334, 0.03118537, -0.022183001, 0.043283511, -0.00111713, -0.017360223, -0.012476343, 0.016625332, -0.036035385, -0.014988209, 0.03618563, 0.011506612, -0.015947102, 0.001922764, -0.084294006, -0.005222864, -0.020355582, 0.058243997, 0.001653283, 0.04175543, -0.016341962, 0.023160107, -0.028642146, -0.01251341, 0.014522098, -0.004431193, 0.046760585, 0.053287994, -0.031150661, -0.0014462, -0.009594921, -0.0391174, -0.04584001, 0.03600068, -0.026939364, 0.052646946, -0.009305509, 0.023651665, 0.025636302, -0.020257866, -0.021728059, -0.015569856, 0.010018519, -0.02437564, 0.005361609, 0.027821269, -0.007882501, 0.022768479, 0.03887631, 0.0074131, -0.014477491, -0.027937505, -0.017782075, -0.032050971, -0.012589843, 0.015063564, -0.004670072, -0.032362644, 0.008738464, 0.007616048, -0.008742598, -0.008126941, 0.01282677, -0.019974729, 0.044416755, 0.018706245, 0.004110466, -0.037363835, 0.001835193, -0.003515686, 0.002670948, 0.019135538, 0.022398455, 0.01745078, 0.006963208, 0.037930578, -0.010789867, 0.007281198, 0.008048629, -0.006295373, 0.022937009, -0.011130307, -0.034877449, -0.012287984, -0.022205582, -0.049219251, 0.029961511, 0.002944651, -0.014541316, 0.02640046, -0.000927329, 0.018898783, 0.014154056, 0.005410526, -0.000258329, -0.026302043, -0.032856006, 0.045477092, -0.006668744, 0.007745023, 0.014358838, -0.010156244, 0.036752138, -0.022165982, 0.028456725, -0.012960317, -0.018919632, 0.01604048, 0.02016364, -0.024941329, 0.0013182, -0.012210182, 0.007593337, -0.000710463, 0.029780284, 0.012326651, -0.037585128, 0.009976067, 0.03007332, -0.033798944, 0.015984429, 0.012692219, -0.02526593, -0.039634056, -0.025179744, -0.022155734, -0.052543521, -0.062722944, -0.002050327, 0.018734971, 0.028678484, -0.014052029, -0.037452821, 0.019475307, 
0.028859012, 0.014578586, 0.001146175, 0.006506037, -0.001387922, 0.017282655, -0.018453784, -0.023963971, -0.003730572, 0.045663442, -0.022658793, -0.009229324, -0.023069087, 0.025768846, -0.009896314, 0.017938552, -0.011646354, -0.039687168, -0.018215343, -0.017861508, -0.044688828, -0.040194042, 0.045632694, -0.069371015, 0.022027617, -0.006465214, 0.018402275, 0.015563626, -0.013454656, -0.019538434, 0.010059197, -0.011015439, -0.010352844, 0.016007345, -0.030400831, 0.016225813, 0.031665225, -0.020887615, -0.00609621, -0.00903313, 0.009699306, 0.019045362, 0.014117499, -0.001653399, 0.027659375, 0.033881404, -0.035415307, -0.012575429, 0.002386923, 0.045318693, 0.019648731, -0.027894411, 0.063151546, -0.019919056, -0.005587909, -0.023773842, -0.003372487, 0.004125345, -0.035242464, 0.008068765, -0.039144419, -0.047504593, -0.032891646, 0.00299862, -0.003852923, -0.015390843, 0.017134739, 0.006639453, 0.013510912, -0.022561979, -0.002769059, -0.001198068, 0.00210565, -0.012595928, 0.014941155, -0.020060217, 0.02406257, 0.003997694, -0.012755258, 0.027232867, 0.0141077, -0.002023422, -0.022545673, 0.013797863, -0.017208988, -0.025878558, -0.006478958, 0.063711539, 0.04815216, 0.053829342, 0.024223238, -0.016742732, 0.062703371, 0.02559915, -0.008966394, -0.023493268, -0.02120115, 0.000611824, -0.002047706, 0.0226682, 0.013453784, -0.005134494, 0.026178585, 0.037016757, -0.022436425, 0.01363996, -0.037418343, -0.001696376, -0.01873149, -0.027977804, -0.016997539, -0.008062853, -0.015733147, -0.03153431, -0.001088231, 0.028254069, -0.005617798, -0.00650365, 0.018883877, -0.013767377, -0.017329738, -0.014273304, -0.01020819, -0.024503173, 0.033769123, -0.001677159, 0.01848951, 0.002620939, -0.01345687, 0.048838396, -0.039051712, -0.037484501, -0.010618104, -0.023755556, -0.043276288, -0.022812387, 0.041284196, 0.000191133, 0.044082254, 0.00988854, 0.000775162, -0.004219532, 0.029304622, -0.000159563, 0.047956958, -0.009022107, 0.009663695, 0.013498711, 0.013341753, -0.006239263, 0.0262876, 0.008061776, 0.005327804, -0.029840384, -0.018596197, -0.036637533, 0.018667594, 0.019062307, 0.013389318, 0.015211889, -0.00793925, 0.006764511, 0.01986222, -0.021661904, 0.020057306, -0.008008783, 0.045864701, 0.025284158, 0.03566758, 0.003686838, 0.015757034, -0.007217112, -0.036837861, 0.002733186, -0.010106512, 0.001044905, 0.008318242, -0.014558088, -0.013355759, 0.003240178, 0.014596032, 0.006481752, 0.028111978, 0.009407085, 0.033123419, -0.021547282, 0.018333012, -0.012182579, 0.046242468, 0.014572355, 0.005895446, -0.024843615, 0.037900526, -0.021420114, -0.022534957, 0.049734276, 0.005535614, 0.019395176, -0.003887281, 0.010220885, -0.026501905, 0.045801811, -0.036292784, -0.008702532, -0.040622182, -0.018621936, -0.029899081, -0.036115285, -0.001091667, 0.028605225, 0.011706801, -0.061634656, 2.539e-05, 0.023150848, 0.04028349, 0.009031616, -0.051419828, 0.036838792, -0.032690387, 0.019212786, 0.021576183, 0.006947805, -0.023350826, 0.015494268, -0.009872583, -0.037689485, 3.3776e-05, -0.016289581, 0.024223763, 0.011166267, 0.034005791, -0.01973387, -0.011461983, 0.004578876, 0.033777047, -0.059561502, 0.010628761, 0.012592085, -0.028173238, 0.004983928, -0.055424977, 0.023941442, 0.015258144, -0.02407841, -0.027858073, -0.022866197, -0.01072118, 0.000891806, -7.3929e-05, 8.7527e-05, 0.038084548, 0.032564133, 0.011673957, 0.014724988, 0.023894854, 0.009600103, 0.010993776, -0.032555282, -0.050410733, -0.017919976, -0.010584779, 0.019684136, 0.001148737, 0.027992537, 0.029773993, -0.035321895, 
0.004375172, -0.018227717, -0.035163496, -0.021728756, 0.034177467, 0.020600634, -0.002479982, -0.015657792, 0.019945988, 0.003030882, -0.03452152, -0.023360493, -0.014217298, -0.016765997, 0.030197244, 0.063730173, -0.009890462, 0.001885406, 0.014999543, -0.017975181, 0.015541205, 0.011049449, -0.056103755, -0.021746986, -0.011690029, -0.007622119, 0.008191464, -0.055434987, -0.026299478, -0.0289841, -0.017471859, -8.3916e-05, -0.014356364, 0.024609976, 0.025466723, 0.026995735, -0.006338262, 0.001575607, -0.024756495, 0.012681999, -0.038303509, 0.018343575, -0.029486548, -0.027943794, 0.019458303, 0.009602433, 0.01613613, -0.013485755, 0.01825813, 0.017384276, -0.000316928, 0.017263787, 0.032837603, -0.009313444, 0.032886986, 0.0102118, 0.035332147, 0.025198612, -0.016384443, 0.036642659, -0.013293929, 0.013863727, 0.016344845, 0.018475069, -0.000290649, -0.007840252, 0.018346254, -0.017929759, -0.023469042, -0.04118729, 0.038131136, 0.028544776, -0.010191069, -0.040393438, -0.03620613, -0.029817089, -0.0319359, -0.022628715, -0.007936688, -0.000475428, -0.021273594, -0.000706794, -0.026177952, -0.023912091, 0.005262377, 0.008055895, -0.004182644, -0.020185538, 0.035657313, 0.035227787, 0.018733107, 0.02888347, 0.004832547, -0.008453495, -0.003766736, 0.050058998, -0.039882839, -0.021579908, 0.016212068, 0.005246013, 0.007194546, 0.022853501, 0.008869258, 0.011671642, -0.026226453, -0.017740613, 0.011699114, -0.050423194, -0.007674807, -0.010096263, 0.025419205, -0.014626921, 0.02581986, -0.001477996, 0.047581926, 0.017626006, -0.039669465, 0.004964914, -0.010106803, 0.016803266, 0.002636051, 0.009402979, 0.026830349, -0.017281257, 0.026732512, 0.044769306, -0.022035304, -0.002668662, -0.048302639, -0.01875902, 0.049695149, 0.008410485, 0.014944417, 0.021057194, -0.002655734, -0.011493313, -0.05742405, -0.027233506, -0.025073057, 0.006897956, -0.014840759, -0.006567823, 0.021026913, 0.021548696, -0.045652263, -0.040652465, -0.002003258, -0.026414275, -0.013808841, -0.018532604, 0.003003978, 0.032578111, -0.012093013, -0.015617376, -0.00637332, -0.017388644, -0.076099686, 0.024051156, 0.054509755, 0.007753249, -0.022590397, -0.017270193, 0.005862136, 0.00566705, 0.056552865, -0.018218137, 0.002630227, -0.018802524, 0.008255202, 0.010540069, -0.025897196, 0.024512608, -0.032300681, 0.018701049, 0.056824006, 0.005657848, 0.029999042, -0.00605466, 0.034501251, 0.028386846, -0.031859726, 0.020399375, 0.010903133, -0.009345181, -0.010869314, 0.02023329, -0.005056008, 0.011826298, -0.051125389, 0.017834837, -0.030273182, -0.025018549, 0.02829227, 0.002678679, 0.040275104, 0.017609002, 0.042985346, 0.048773643, -0.011546772, 0.00323696, -0.042179611, 0.00390743, 0.00747477, 0.002057897, -0.007932656, 0.035253875, 0.040193111, 0.028112909, 0.002775698, -0.013180255, 0.05293905, -0.016773218, -0.011929666, -0.017886667, -0.03832005, -0.001474327, 0.003844304, -0.020997096, 0.05371194, 0.034125056, 0.00527363, 0.030854598, -0.009823259, -0.014736343, 0.01346712, -0.003143042, 0.03621766, 0.016900169, -0.026269898, -0.04309646, -0.034129016, 0.026982691, 0.044919901, 0.04545543, 0.034214042, 0.039675053, 0.02023958, 0.029184952, 0.000920457, -0.027673122, -0.004076311, 0.024206176, -0.001077706, 0.036422297, -0.009551186, 0.027132703, -0.03030486, -0.009744526, 0.009027307, -0.002761124, -0.013965347, -0.023005495, 0.03726647, -0.009159151, -0.012412722, 0.046041735, 0.002539789, -0.04388845, 0.013942474, 0.012532802, -0.028174404, 0.001187813, 0.034147058, -0.011002627, -0.053223703, 0.000261648, 
-0.008412198, -0.026623497, -0.014190278, 0.013136841, -0.020314703, 0.011970721, -0.014818396, 0.004021381, 0.038948283, -0.009349752, -0.033922866, -0.046694897, 0.033164416, -0.012412606, -0.006987055, 0.005013802, -0.025574109, 0.037253805, -0.025639564, 0.010128132, 0.006705403, 0.028493064, 0.036673874, -0.01876927, 0.011673374, -0.00912504, 0.043242745, -0.022041215, -0.013873131, 0.004001421, 0.014715438, 0.009864256, -0.005650744, 0.005212543, 0.009848242, -0.004596463, 0.014585982, 0.001060599, 0.019630212, -0.052668378, -0.007540052, 0.01891998, 0.002873386, -0.022146182, -0.008001561, 0.018102599, 0.012984353, -0.012753745, 0.047343399, -0.019010827, -0.012095343, -0.028844832, 0.027450385, 0.036025371, -0.026323007, 0.038918469, -0.024301914, 0.024433874, 0.015063681, 0.008245798, -0.017457766, 0.033603974, -0.001605996, -0.04229841, 0.049860653, -0.012155674, -0.030027665, -0.040266719, -0.044764303, -0.019847078, 0.042488955, 0.039395526, 0.024377037, -0.015419845, -0.042060345, -0.039582811, -0.031520102, 0.002762974, 0.003738492, 0.006927481, 0.012872193, -0.028545475, -0.005230406, 0.006820213, 0.020259146, 0.0190474, -0.018458763, 0.032726493, 0.009633414, -0.024924967, -0.019173885, 0.000709415, 0.019329721, 0.037488226, 0.006577199, -0.00800983, -0.022320187, 0.013312099, 0.037994169, 0.048728917, 0.003272542, 0.008341069, -0.03505262, -0.011916213, -0.009421906, 0.003632912, -0.053927179, 0.058800019, -0.016113304, 0.010717394, 0.004583652, 0.005885954, -0.006185091, -0.004404172, -0.041753337, 0.017930575, -0.012142047, -0.01298511, 0.007763761, -0.028712956, -0.034760512, -0.010070872, -0.069825709, -0.019545421, -0.011839343, 0.022942251, -0.019640811, 0.004966545, -0.000883216, 0.019450616, -0.010295076, -0.010681492, 0.023387048, -0.023490474, -0.036058448, -0.018022003, 0.012867885, -0.015124739, 0.006681992, -0.03926345, 0.0044567, 0.029135974, 0.004535754, -0.010490454, 0.04586377, -0.002237143, 0.028558986, 0.010980673, -0.005637721, 0.006188716, -0.000773065, -0.003796042, -0.010051538, -0.014218376, -0.023889147, -0.027194895, 0.021702843, -0.000127476, -0.023174141, -0.027701072, -0.028250808, 0.051525578, 0.021920698, 0.044554655, -0.037631717, 0.006336661, -0.017803447, 0.024745546, -0.002253915, 0.012576362, -0.012260497, 0.008785051, 0.035455137, -0.01433633, -0.035767276, -0.035517562, 0.028417127, 0.015258068, 0.003360724, -0.016561128, 0.029584149, 0.009740099, 0.010442876, 3.6215e-05, 0.047732871, -0.050517883, -0.025515407, -0.012026627, 0.017299311, 0.042422917, 0.00455931, -0.036336109, -0.036769845, -0.011859143, 0.029975951, -0.004758006, 0.023865504, -0.005783548, 0.038912646, 0.019545887, -0.029897219, 0.054783925, -0.022606848, 0.013859833, 0.004012602, -0.004110699, 0.010832582, -0.004449653, 0.018642087, 0.01201428, -0.016844381, 0.021052158, -0.030187925], index=0, object='embedding')], model='voyage-code-2', object='list', usage=Usage(prompt_tokens=None, total_tokens=6)) ``` {% endcode %}
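Since this model is tuned for code retrieval, here is a minimal sketch of ranking a few code snippets against a natural-language query by cosine similarity (the snippet corpus and helper functions are illustrative, not part of the official example):

```python
import math
import openai

client = openai.OpenAI(
    # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
    api_key="<YOUR_AIMLAPI_KEY>",
    base_url="https://api.aimlapi.com/v1",
)

# Illustrative corpus of code snippets to search over
snippets = [
    "def add(a, b):\n    return a + b",
    "def read_file(path):\n    with open(path) as f:\n        return f.read()",
    "SELECT name FROM users WHERE active = 1;",
]

query = "function that sums two numbers"

def embed(text: str) -> list[float]:
    """Embed a query or a code snippet with voyage-code-2."""
    response = client.embeddings.create(input=text, model="voyage-code-2")
    return response.data[0].embedding

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Rank snippets by similarity to the query; the best match is expected to be the `add` function
query_vec = embed(query)
ranked = sorted(snippets, key=lambda s: cosine(query_vec, embed(s)), reverse=True)
print(ranked[0])
```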
You can find a more advanced example of using embedding vectors in our article [Find Relevant Answers: Semantic Search with Text Embeddings](https://docs.aimlapi.com/use-cases/find-relevant-answers-semantic-search-with-text-embeddings) in the Use Cases section. --- # Source: https://docs.aimlapi.com/api-references/embedding-models/anthropic/voyage-finance-2.md # voyage-finance-2 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `voyage-finance-2` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A finance domain-specific embedding model that delivers superior retrieval quality, outperforming competing models on financial datasets—achieving an average 7% gain over OpenAI (the next best model) and 12% over Cohere. It supports a 32K context length, significantly larger than other evaluated alternatives. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema {% openapi src="" path="/v1/embeddings" method="post" %} [voyage-finance-2.json](https://3927338786-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FROMd1X5PuqtikJ48n2N9%2Fuploads%2Fgit-blob-303497050a55011a6168a5acf97c270825de27ef%2Fvoyage-finance-2.json?alt=media\&token=e7b34fd9-f5d3-4100-8d73-7886a100dfe6) {% endopenapi %} ## Code Example {% tabs %} {% tab title="Python" %}
```python
import openai

# Initialize the API client
client = openai.OpenAI(
    # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
    api_key="<YOUR_AIMLAPI_KEY>",
    base_url="https://api.aimlapi.com/v1",
)

# Define the text for which to generate an embedding
text = "Laura is a DJ."

# Request the embedding
response = client.embeddings.create(
    input=text,
    model="voyage-finance-2"
)

# Print the embedding
print(response)
```
{% endtab %}

{% tab title="JS" %}
```javascript
import OpenAI from "openai";
import util from "util";

// Initialize the API client
const client = new OpenAI({
  // Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
  apiKey: "<YOUR_AIMLAPI_KEY>",
  baseURL: "https://api.aimlapi.com/v1",
});

// Define the text for which to generate an embedding
const text = "Laura is a DJ.";

// Request the embedding
const response = await client.embeddings.create({
  input: text,
  model: "voyage-finance-2",
});

// Convert the embedding to a regular array (not a TypedArray)
const pythonLikeResponse = {
  ...response,
  data: response.data.map(item => ({
    ...item,
    embedding: Array.from(item.embedding),
  })),
};

// Python-like print
console.log(
  util.inspect(pythonLikeResponse, {
    depth: null,
    maxArrayLength: null,
    compact: true,
  })
);
```
{% endtab %}
{% endtabs %}

This example shows how to set up an API client, send text to the embeddings endpoint, and print the response containing the embedding vector. Note how large a vector the model produces from a single short input phrase.
Response {% code overflow="wrap" %} ```json CreateEmbeddingResponse(data=[Embedding(embedding=[0.05998329073190689, 0.0020094653591513634, -0.0009767906740307808, 0.00030224627698771656, -0.011362964287400246, -0.020030111074447632, -0.054072171449661255, 0.04061434045433998, 0.005987467244267464, -0.0023644473403692245, -0.003415028564631939, -0.02976968139410019, 0.006293648853898048, -0.0304449163377285, 0.040052350610494614, 0.048378292471170425, -0.017893923446536064, -0.03299994021654129, 0.04526452720165253, 0.05257825553417206, -0.01704838126897812, -0.04987850412726402, 0.004255060572177172, 0.022154690697789192, -0.0158899687230587, 0.023094290867447853, 0.01462775468826294, 0.03831601142883301, -0.014101676642894745, 0.01765582524240017, 0.051165878772735596, -0.00420940900221467, -0.0237688347697258, -0.05128426477313042, 0.03301410749554634, -0.02753707580268383, -0.048054005950689316, 0.02127215452492237, 0.04650184512138367, -0.0240122452378273, 0.012495599687099457, -0.002435286296531558, -0.026180509477853775, -0.03718573600053787, -0.02123280055820942, 0.0346754789352417, -0.01713358610868454, -0.009632525965571404, -0.01427326537668705, -0.01605289801955223, -0.025886133313179016, -0.009767119772732258, -0.028160851448774338, -0.016384266316890717, 0.03549111261963844, 0.02565157786011696, 0.03204440325498581, 0.017668813467025757, 0.09095723181962967, -0.04095436632633209, 0.02439507097005844, -0.004287331830710173, 0.01835831254720688, 0.019812872633337975, 0.006503804121166468, -0.02443786896765232, -0.02184358984231949, -0.0031295083463191986, -0.04851052537560463, -0.01811431162059307, -0.03370360657572746, -0.0057467129081487656, 0.03714480623602867, -0.006447133142501116, -0.04097955673933029, -0.04207519814372063, 0.000941371195949614, 0.018186725676059723, 0.06052481755614281, -0.024365456774830818, -0.025785384699702263, -0.036017678678035736, 0.011526287533342838, -0.05458063632249832, 0.007611253298819065, -0.05799350142478943, 0.013456255197525024, -0.01807180792093277, 0.018907709047198296, 0.01101427897810936, -0.022793225944042206, 0.021390024572610855, 0.00768366688862443, -0.010249611921608448, 0.019441360607743263, 0.02082744427025318, -0.05008472502231598, -0.030915699899196625, -0.039534442126750946, -0.038486022502183914, 0.004812327213585377, -0.012024127878248692, -0.025627965107560158, 0.011932430788874626, 0.019298994913697243, 0.02753746695816517, 0.016620397567749023, -0.02183680050075054, 0.0039591109380126, -0.005990615580230951, -0.01160184945911169, 0.015504683367908001, 0.06474366784095764, -0.025232840329408646, 0.005249954294413328, 0.0009335001814179122, -0.0547923669219017, 0.01141766831278801, 0.02281014807522297, -0.005465620197355747, -0.004690130241215229, 0.00017394902533851564, -0.015349229797720909, 0.04089927300810814, 0.04748690500855446, 0.027831843122839928, -0.05132834240794182, -0.005907182581722736, -0.020184382796287537, 0.033893149346113205, -0.008761993609368801, -0.026230884715914726, 0.07481224834918976, -0.01574110798537731, -0.0027855457738041878, 0.018591735512018204, 0.03876150771975517, -0.022954972460865974, -0.03236396238207817, 0.03431439772248268, 0.09112724661827087, -0.03385158255696297, 0.0055653853341937065, -0.007976467721164227, 0.03260638937354088, 0.014840074814856052, -0.04912761226296425, 0.025911320000886917, -0.02073928900063038, -0.02789008989930153, -0.0031830312218517065, -0.02207932434976101, 0.0014455084456130862, 0.05048142373561859, -0.020697573199868202, 0.010959574952721596, 
-0.009045152924954891, 0.008501463569700718, 0.0381585918366909, 0.0024470926728099585, 0.045588813722133636, 0.014106006361544132, -0.04562029615044594, 0.004478597082197666, 0.04324325546622276, 0.03769262880086899, 0.029338747262954712, 0.03258592635393143, 0.023712163791060448, 0.003382954280823469, -0.004383161198347807, -0.007679731119424105, -0.014065470546483994, -0.021994713693857193, 0.012612484395503998, -0.005804073065519333, 0.005056524649262428, -0.04476216062903404, -0.011551475152373314, -0.0665634423494339, -0.008693515323102474, 0.07090193778276443, -0.03142101690173149, 0.018074169754981995, 0.014141229912638664, 0.034040484577417374, 0.05973771587014198, 0.0571528822183609, 0.014943874441087246, 0.033508408814668655, 0.028952674940228462, -0.024932170286774635, 0.027422551065683365, 0.025639772415161133, -0.007447143085300922, -0.04538102075457573, -0.03109988011419773, 0.021512219682335854, -0.04082843288779259, 0.014589679427444935, -0.019024986773729324, 0.03206644207239151, 0.0064652361907064915, 0.021108439192175865, 0.0672505795955658, 0.018970675766468048, -0.004376569297164679, 0.0040740277618169785, -0.033804357051849365, -0.009142949245870113, 0.0507521852850914, -0.00088155159028247, 0.024208037182688713, -0.05147001892328262, -0.004743062425404787, -0.007965448312461376, -0.024847161024808884, -0.0001385295472573489, 0.063540980219841, -0.005238147918134928, -0.01683133840560913, 0.010071727447211742, -0.014035560190677643, -0.026858989149332047, -0.020168641582131386, 0.022322148084640503, 0.011535733006894588, 0.004873720929026604, -0.0443231575191021, -0.03368786349892616, 0.005870189517736435, 0.022707823663949966, 0.012822837568819523, 0.02537197433412075, -0.027740539982914925, 0.02444416657090187, -0.0017615288961678743, -0.01945769414305687, -0.0238333772867918, -0.02080102637410164, 0.069831483066082, 0.043303076177835464, -0.03029310517013073, -0.03722154721617699, 0.07842031121253967, -0.057256776839494705, 0.02725253812968731, -0.008332778699696064, -0.04658055678009987, -0.033032212406396866, -0.030559932813048363, 0.00021487820777110755, 0.042667098343372345, 0.010525096207857132, 0.044537246227264404, 0.020196978002786636, -0.06813764572143555, 0.0036104260943830013, -0.011118569411337376, -0.01491081528365612, -0.025467395782470703, 0.01671445555984974, 0.025272196158766747, -0.025838905945420265, 0.020529134199023247, 0.03321344032883644, 0.02932024747133255, -0.014424387365579605, -0.039529718458652496, -0.01744842529296875, 0.005314497277140617, -0.05680970475077629, 0.00578675651922822, 0.014954105950891972, -0.005667117424309254, -0.05651375651359558, 0.019708188250660896, -0.0047596655786037445, -0.0005113687948323786, 0.040914326906204224, -0.039194412529468536, -0.021265070885419846, 0.007210324518382549, 0.09784750640392303, -0.0022385113406926394, 0.013759681954979897, -0.06519703567028046, 0.01932172290980816, -0.031508781015872955, 0.0035210903733968735, -0.015613695606589317, -0.005586833227425814, 0.015562928281724453, 0.016014330089092255, 0.023735778406262398, -0.05907025188207626, 0.04065212234854698, 0.011116995476186275, -0.05414615944027901, 0.03648364543914795, 0.03580516204237938, -0.009240549989044666, 0.038822904229164124, 0.021526414901018143, -0.051738422363996506, 0.01016617938876152, 0.06728599965572357, -0.04924095422029495, 0.06629268079996109, -0.004479384049773216, -0.0030600959435105324, -0.023918580263853073, -0.055448018014431, -0.03436634689569473, 0.0031858349684625864, -0.024116339161992073, 
0.032461561262607574, 0.022137966006994247, -0.011152808554470539, -0.00507207028567791, 0.017849845811724663, 0.01615758240222931, 0.03389408439397812, -0.002392783062532544, -0.03985242918133736, -0.07045800983905792, -0.0396178737282753, 0.007800945080816746, -0.013867909088730812, 0.047704536467790604, -0.002998849842697382, -0.024375688284635544, -0.025155704468488693, -0.025879835709929466, 0.04061434045433998, 0.03859936445951462, -0.008309410884976387, 0.007323175203055143, 0.02757682278752327, -0.03525261953473091, -0.0032058567740023136, 0.01774594932794571, -0.0012668368872255087, -0.04342665150761604, -0.03416327387094498, -0.026073463261127472, 0.03355091065168381, 0.039524998515844345, -0.011649861931800842, 0.027844436466693878, 0.03346039354801178, -0.03432069346308708, -0.06633675843477249, -0.035410039126873016, -0.0538848415017128, 0.021477587521076202, 0.0015616057207807899, -0.03266778588294983, -0.02024892531335354, 0.009426305070519447, 0.032516662031412125, -0.017013946548104286, 0.025106118991971016, 0.008448727428913116, -0.005250742193311453, 0.05984633415937424, 0.005194070748984814, -0.0017032836331054568, -0.006139377132058144, 0.04480171203613281, -0.010826555080711842, -0.018648358061909676, -0.03529984503984451, 0.025618519634008408, -0.03672134876251221, -0.031306102871894836, 0.01443619467318058, 0.05649486556649208, -0.01837090589106083, -0.0071681165136396885, -0.031160486862063408, -0.045573070645332336, -0.008009525947272778, -0.02223399095237255, -0.024629922583699226, -0.030323801562190056, 0.04286229610443115, -0.009397969581186771, -0.017272114753723145, 0.032853737473487854, -0.02477789856493473, -0.001403201837092638, -0.0008362933294847608, -0.02109190821647644, 0.025572868064045906, -0.01854328066110611, -0.0035970453172922134, -0.02580978348851204, -0.009980029426515102, -0.025496911257505417, 0.012189812958240509, 0.0008752548019401729, -0.010573110543191433, 0.016454318538308144, -0.056025754660367966, -0.014511755667626858, -0.011952894739806652, 0.003825550200417638, -0.006327100098133087, -0.02363896369934082, -0.035021211951971054, 0.01493915170431137, -0.027180124074220657, 0.07219179719686508, -0.052795495837926865, -0.016941532492637634, -0.023544512689113617, 0.02043086849153042, 0.0027666552923619747, 0.002901839790865779, -0.0008326530223712325, -0.014422420412302017, 0.006384558975696564, 0.008998861536383629, 0.043776318430900574, 0.07937191426753998, 0.007672647014260292, 0.0012467658380046487, 0.022034065797924995, 0.006863508839160204, -0.021745989099144936, 0.004170841071754694, -0.028045935556292534, 0.013697895221412182, -0.05102924630045891, 0.04050099849700928, -0.0076348669826984406, -0.05588407814502716, -0.00810555275529623, -0.027732668444514275, 0.0246102437376976, 0.05048771947622299, -0.05566132441163063, -0.018931319937109947, -0.006872954312711954, 0.035375405102968216, -0.0059122988022863865, -0.03821112960577011, 0.01859877072274685, -0.015879735350608826, -0.012507504783570766, 0.036680418998003006, -0.022497177124023438, -0.011802165769040585, 0.008688005618751049, 0.02209310047328472, 0.023890836164355278, 0.028930634260177612, 0.0206361785531044, 0.02565629966557026, -0.033187270164489746, 0.008853296749293804, 0.0095207579433918, 0.010785773396492004, 0.017449408769607544, -0.018055083230137825, -0.03205227106809616, 0.011249424889683723, -0.038227856159210205, -0.007723415270447731, 0.06609905511140823, 0.003201134270057082, -0.006514036562293768, 0.02279755473136902, -0.006778502371162176, 
0.028126612305641174, 0.008336959406733513, -0.042415227741003036, -0.09749960899353027, 0.017626309767365456, 0.021092303097248077, 0.06581491231918335, 0.028847988694906235, -0.048387736082077026, -0.011485358700156212, 0.051847830414772034, 0.031203780323266983, 0.004884740337729454, -0.08841175585985184, 0.050009164959192276, 0.041477005928754807, 0.02786332741379738, 0.026370985433459282, 0.08446524292230606, 0.04306615889072418, -0.027827123180031776, 0.0002770590945146978, -0.005222455598413944, 0.016004884615540504, 0.032872430980205536, -0.002060036640614271, -0.0008398352656513453, -0.0698220357298851, -0.0011292912531644106, 0.032189227640628815, -0.018285898491740227, -0.03592952340841293, 0.0010652163764461875, 0.012232118286192417, -0.029788967221975327, 0.012777776457369328, -0.04059860110282898, 0.06140243262052536, 0.01350112073123455, -0.02221510000526905, -0.038490746170282364, 0.0027186423540115356, -0.031285639852285385, -0.06379915028810501, 0.0021220205817371607, -0.062300510704517365, 0.03836480900645256, 0.019452380016446114, -0.013178016059100628, -0.012719530612230301, -0.011346829123795033, 0.0017867162823677063, -0.040597815066576004, 0.010290540754795074, 0.0158317219465971, 0.04621062055230141, 0.01734216697514057, 0.004925276152789593, -0.016968294978141785, -0.0018874648958444595, 0.03148870915174484, -0.06702310591936111, 0.04382255673408508, 0.01584431529045105, 0.035047974437475204, -0.038527149707078934, -0.0037784718442708254, -0.026160044595599174, -0.0602855384349823, -0.011728375218808651, -0.03156663104891777, 0.002178691793233156, -0.0024205774534493685, 0.0349566675722599, -0.032461561262607574, 0.005801711697131395, -0.008994188159704208, -0.03145250305533409, 0.009314537048339844, 0.010939701460301876, 0.04207992181181908, -0.004412480629980564, -0.019956814125180244, 0.028014056384563446, 0.03370045870542526, 0.016315001994371414, 0.03716684505343437, -0.03449149429798126, 0.014281135983765125, -0.05224137753248215, 0.0206062700599432, -0.004451217595487833, 0.007264142856001854, 0.00025187188293784857, -0.008440068922936916, -0.07432739436626434, 0.029657913371920586, -0.013232719153165817, -0.03275593742728233, 0.026479607447981834, 0.0037875233683735132, -0.004654120188206434, 0.01173250749707222, 0.03589174523949623, -0.02001279592514038, 0.001224727020598948, 0.06134969741106033, -0.0013018627651035786, 0.02493925206363201, 0.0348685160279274, -0.019972654059529305, -0.008291603066027164, 0.0399913527071476, 0.02571611851453781, 0.004793437197804451, 0.0014167793560773134, 9.209066047333181e-05, 0.014222104102373123, 0.02342565916478634, -0.015688469633460045, -0.003960685804486275, -0.00823621079325676, 0.07655645906925201, -0.0698818564414978, 0.0011690397514030337, 0.019501181319355965, 0.00748945027589798, -0.03667490929365158, 0.042966198176145554, -0.009558046236634254, -0.02027568779885769, 0.010190579108893871, -0.030428141355514526, -0.023950261995196342, -0.026913298293948174, -0.02296048402786255, 0.031194334849715233, 0.0021283174864947796, -0.01876170001924038, -0.02248586341738701, 0.02188136987388134, -0.025193190202116966, 0.0011318493634462357, -0.02534775622189045, -0.000571827928069979, -0.04098900035023689, 0.0006152167916297913, 0.03878354653716087, 0.0349598191678524, -0.0015238249907270074, 0.00802487414330244, 0.012004056014120579, 0.02877761609852314, 0.0349094457924366, -0.0443735271692276, -0.024454448372125626, -0.03296392783522606, 0.023518536239862442, -0.06033354997634888, -0.0667271614074707, 
-0.019117863848805428, -0.0035687098279595375, 0.034978706389665604, 0.07175987958908081, 0.02156967855989933, 0.03735889866948128, -0.01371639221906662, -0.017059598118066788, -0.05764245614409447, 0.0776316374540329, 0.032934218645095825, 0.006652369629591703, -0.005681875627487898, -0.011630184017121792, -0.008852214552462101, -0.02662600576877594, 0.014682557433843613, 0.03399011120200157, 0.022325294092297554, 0.01656372658908367, 0.04178554564714432, -0.004755656234920025, -0.024500837549567223, 0.012866717763245106, 0.007858403027057648, 0.03728963062167168, -0.018086763098835945, -0.04178554564714432, 0.04872068017721176, -0.0602603517472744, -0.049880076199769974, 0.008728541433811188, -0.027844436466693878, 0.0058355568908154964, -0.003779746126383543, 0.06180306524038315, -0.036351412534713745, 0.01612924598157406, -0.05828944966197014, 0.03847185894846916, -0.04505043476819992, 0.0023109246976673603, 0.006900699809193611, 0.027716927230358124, 0.006321984343230724, 0.04912131279706955, 0.013094582594931126, 0.026945175603032112, 0.039490364491939545, 0.004511655308306217, -0.0076702856458723545, -0.031276192516088486, -0.020605482161045074, 0.011501099914312363, 0.05421857163310051, -0.0013908051187172532, 0.02382078394293785, 0.027181895449757576, 0.01507768128067255, 0.032491471618413925, -0.07775285094976425, 0.028947727754712105, 0.031005429103970528, 0.07742542028427124, -0.00014482633559964597, -0.035025935620069504, -0.013399584218859673, 0.0005194857367314398, -0.018679646775126457, -0.03628215193748474, -0.011076066642999649, 0.0019063553772866726, 0.01285569928586483, -0.004072060342878103, 0.009052040055394173, -0.016322871670126915, -0.006320410408079624, 0.04001988470554352, 0.05650116130709648, 0.01681559719145298, -0.0001432521385140717, 0.03908422216773033, -0.027051040902733803, 0.024593716487288475, -0.042314477264881134, 0.006334577687084675, 0.02757997065782547, -0.049089834094047546, 0.03867807611823082, -0.020651133731007576, -0.00013380694144871086, 0.017705807462334633, 0.05580536648631096, -0.013909625820815563, -0.03120259754359722, 0.0460783913731575, -0.06435327231884003, 0.0381208099424839, 0.030517427250742912, 0.026204513385891914, 0.013538113795220852, -0.08087606728076935, -0.03893309831619263, -0.012713233940303326, -0.023277685046195984, -0.009895022958517075, -0.016714848577976227, -0.02043294720351696, 0.03652457520365715, -0.026081334799528122, -0.012436174787580967, -0.022780632600188255, 0.019109204411506653, 0.02466721087694168, -0.04137468338012695, -0.019806575030088425, -0.0597703792154789, 0.022621244192123413, 0.003512038616463542, -0.007027225568890572, -0.003242850536480546, -0.007434943225234747, 0.0026749579701572657, -0.008576041087508202, 0.019727865234017372, 0.023708228021860123, -0.010384992696344852, -0.020111970603466034, -0.04226095601916313, 0.030848009511828423, -0.0667995736002922, -0.04358091950416565, -0.02885822020471096, 0.018564533442258835, 0.0034018447622656822, -0.0021889242343604565, 0.03212665393948555, 0.003401205176487565, -0.008883994072675705, 0.012370058335363865, -0.0032306506764143705, -0.019526368007063866, 0.007310581859201193, -0.013120952062308788, 0.03380799666047096, 0.02885664626955986, -0.09991757571697235, -0.02313443273305893, -0.0379948727786541, 0.046857383102178574, 0.03529354929924011, 0.025920765474438667, 0.005342832766473293, 0.0047446368262171745, -0.020464591681957245, 0.020189106464385986, 0.018772326409816742, -0.014987952075898647, 0.03518020734190941, 0.025775939226150513, 
0.0005745827802456915, 0.02919195219874382, -0.05365815758705139, -0.05523939058184624, -0.01004850771278143, 0.021895144134759903, -0.004717875272035599, -0.02888025902211666, 0.03391139954328537, 0.016407879069447517, -0.05486714094877243, 0.027438294142484665, 0.012648691423237324, 0.0666106715798378, 0.014282513409852982, 0.01645038276910782, 0.04662936180830002, 0.017634181305766106, -0.0633394792675972, 0.036982662975788116, -0.059492141008377075, -0.012924962677061558, 0.00610946724191308, 0.002208638470619917, -0.029758663848042488, -0.026185231283307076, 0.04242467135190964, 0.023741286247968674, -0.010693536140024662, -0.0007729318458586931, -0.02246205136179924, 0.014551111496984959, 0.0005336535978130996, 0.00850903894752264, -0.013227997347712517, -0.002758784219622612, 0.014649498276412487, -0.0380358025431633, 0.01586163230240345, -0.014817937277257442, 0.033782318234443665, -0.024648813530802727, -0.007702557370066643, 0.041836708784103394, -0.036197926849126816, 0.03209162503480911, 0.042018525302410126, -0.033190418034791946, 0.03966982290148735, -0.01597261242568493, -0.008011887781322002, 0.012568407692015171, 0.044850513339042664, 0.03543994948267937, 0.016993198543787003, -0.04036876559257507, 0.05738743394613266, 0.04982025548815727, -0.06830766052007675, 0.02945012040436268, 0.0021716079208999872, -0.009838745929300785, -0.012451129965484142, 0.0006147248204797506, -0.01835634373128414, 0.022898156195878983, 0.008229028433561325, 0.06228791922330856, 0.025273770093917847, 0.004374699667096138, 0.006393610034137964, 0.032406073063611984, -0.02240479178726673, -0.045812349766492844, -0.018316596746444702, 0.023669660091400146, -0.00254233181476593, -0.04527397081255913, 0.030148277059197426, 0.018632223829627037, -0.0026907003484666348, -0.039112553000450134, -0.041061416268348694, 0.015946639701724052, -0.04325270280241966, 0.004539990797638893, -0.019989969208836555, 0.04532906785607338, -0.016484228894114494, -0.023066742345690727, -0.008261004462838173, 0.00483279163017869, -0.013040273450314999, 0.031263597309589386, -0.008632909506559372, -0.03172611817717552, -0.012401541694998741, 0.0006989444955252111, -0.0014144181041046977, 0.015558402985334396, -0.02600577287375927, 0.019806575030088425, -0.008675412274897099, 0.008472341112792492, -0.03296097740530968, 0.025017913430929184, -0.008209842257201672, 0.0010011907434090972, -0.016039123758673668, 0.04483949393033981, -0.04811067879199982, -0.014651860110461712, -0.020371712744235992, 0.010679367929697037, -0.03960685431957245, -0.012844383716583252, -0.06382434070110321, -0.042352259159088135, 0.0004116531345061958, -0.07656905800104141, -0.042671818286180496, 0.00515983160585165, 0.03071105293929577, 0.01958402246236801, 0.059144243597984314, 0.033380404114723206, 0.0712403878569603, -0.011990676634013653, -0.03514872118830681, -0.005546887870877981, 6.296797437244095e-06, -0.01492104772478342, 0.01597379334270954, 0.0024667703546583652, -0.013663459569215775, -0.00015387799066957086, 5.9819572925334796e-05, 0.0206582173705101, 0.010630961507558823, 0.06446661055088043, 0.0032680376898497343, -0.015351592563092709, -0.0036623745691031218, 0.03622547537088394, 0.040469516068696976, -0.007285394240170717, 0.007851908914744854, -0.02345556952059269, -0.010842937044799328, -0.0015647541731595993, 0.01119054015725851, 0.014537731185555458, 0.027627592906355858, 0.08039750903844833, 0.056650709360837936, 0.03744705393910408, -0.02064247615635395, 0.008913903497159481, 0.021239884197711945, -0.0037906719371676445, 
-0.009450704790651798, -0.008861168287694454, -0.020189106464385986, -0.024182848632335663, 0.00046911140088923275, -0.0017048579175025225, 0.029964882880449295, -0.02665453962981701, 0.02819863334298134, 0.0005627762293443084, -0.053138673305511475, -0.011952894739806652, -0.009741145186126232, 0.0443357452750206, 0.0476604588329792, -0.01060695480555296, -0.027830269187688828, 0.046180713921785355, -0.032533977180719376, 0.0029012493323534727, -0.03462136536836624, -0.002422692719846964, -0.0020165492314845324, -0.05325654149055481, 0.005871762987226248, 0.005980383139103651, 0.008321020752191544, 0.05109851062297821, 0.026844821870326996, -0.036639489233493805, -0.02236386388540268, -0.005984712392091751, 0.020189106464385986, -0.01363965030759573, 0.014434619806706905, 0.04634442925453186, -0.07150642573833466, -0.02823483757674694, -0.059141095727682114, -0.004051988944411278, -0.0019661749247461557, 0.03004162572324276, -0.011372015811502934, -0.04496070742607117, -0.01713516004383564, 0.05986316129565239, 0.054760098457336426, 0.06777872890233994, -0.034070394933223724, -0.051545582711696625, 0.0005714343278668821, -0.03373509272933006, -0.0076128276996314526, -0.041706837713718414, -0.04687650874257088, 0.022655876353383064, 0.0008909967727959156, -0.021380776539444923, -0.019860098138451576, -0.04833579063415527, -0.024119094014167786, 0.018563155084848404, 0.05007331073284149, 0.03592952340841293, -0.0035027903504669666, -0.012814769521355629, 0.005860744044184685, -0.041683223098516464, -0.0008044158457778394], index=0, object='embedding')], model='voyage-finance-2', object='list', usage=Usage(prompt_tokens=None, total_tokens=6), meta={'usage': {'credits_used': 2}}) ``` {% endcode %}
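In retrieval scenarios you typically compare such vectors rather than print them. The sketch below is illustrative only (the helper functions and the sample sentences are ours, not part of the API): it embeds a query and a candidate passage with `voyage-finance-2` and scores them with cosine similarity.

```python
import openai

client = openai.OpenAI(
    # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
    api_key="<YOUR_AIMLAPI_KEY>",
    base_url="https://api.aimlapi.com/v1",
)

def embed(text: str) -> list[float]:
    """Request an embedding for a single string and return the raw vector."""
    response = client.embeddings.create(input=text, model="voyage-finance-2")
    return response.data[0].embedding

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)

query = embed("What were the company's quarterly earnings?")
passage = embed("The firm reported Q3 revenue of $2.1B, up 8% year over year.")
print(f"Similarity: {cosine_similarity(query, passage):.4f}")
```

A higher score means the passage is semantically closer to the query; in a real search pipeline you would precompute and index the passage vectors.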
You can find a more advanced example of using embedding vectors in our article [Find Relevant Answers: Semantic Search with Text Embeddings](https://docs.aimlapi.com/use-cases/find-relevant-answers-semantic-search-with-text-embeddings) in the Use Cases section. --- # Source: https://docs.aimlapi.com/api-references/embedding-models/anthropic/voyage-large-2-instruct.md # voyage-large-2-instruct {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `voyage-large-2-instruct` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview An instruction-tuned, general-purpose text embedding model optimized for tasks such as clustering, classification, and retrieval. It is designed to perform exceptionally well on the Massive Text Embedding Benchmark (MTEB), ranking first in several key areas. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema {% openapi src="" path="/v1/embeddings" method="post" %} [voyage-large-2-instruct.json](https://3927338786-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FROMd1X5PuqtikJ48n2N9%2Fuploads%2Fgit-blob-a7e98534c6e1f5a72876ae258894308c390bc56e%2Fvoyage-large-2-instruct.json?alt=media\&token=5efe7ce8-7a58-47bd-807e-81912ced7b75) {% endopenapi %} ## Code Example {% tabs %} {% tab title="Python" %}
```python
import openai

# Initialize the API client
client = openai.OpenAI(
    # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
    api_key="<YOUR_AIMLAPI_KEY>",
    base_url="https://api.aimlapi.com/v1",
)

# Define the text for which to generate an embedding
text = "Laura is a DJ."

# Request the embedding
response = client.embeddings.create(
    input=text,
    model="voyage-large-2-instruct"
)

# Print the embedding
print(response)
```
{% endtab %}

{% tab title="JS" %}
```javascript
import OpenAI from "openai";
import util from "util";

// Initialize the API client
const client = new OpenAI({
  // Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
  apiKey: "<YOUR_AIMLAPI_KEY>",
  baseURL: "https://api.aimlapi.com/v1",
});

// Define the text for which to generate an embedding
const text = "Laura is a DJ.";

// Request the embedding
const response = await client.embeddings.create({
  input: text,
  model: "voyage-large-2-instruct",
});

// Convert the embedding to a regular array (not a TypedArray)
const pythonLikeResponse = {
  ...response,
  data: response.data.map(item => ({
    ...item,
    embedding: Array.from(item.embedding),
  })),
};

// Python-like print
console.log(
  util.inspect(pythonLikeResponse, {
    depth: null,
    maxArrayLength: null,
    compact: true,
  })
);
```
{% endtab %}
{% endtabs %}

This example shows how to set up an API client, send text to the embeddings endpoint, and print the response containing the embedding vector. Note how large a vector the model produces from a single short input phrase.
Response {% code overflow="wrap" %} ```json CreateEmbeddingResponse(data=[Embedding(embedding=[0.03671441972255707, 0.002773779910057783, -0.011865917593240738, 0.03921204060316086, 0.0628889948129654, 0.011656975373625755, 0.0020900098606944084, 0.002376382937654853, 0.006883290130645037, 0.006720184814184904, -0.012811696156859398, 0.026598026975989342, -0.02126833237707615, -0.004691239912062883, -0.055087000131607056, -0.042521897703409195, -0.013980692252516747, -0.019960040226578712, 0.013892094604671001, -0.004676965996623039, -0.002246932126581669, -0.006324880290776491, 0.006354904733598232, 0.07155899703502655, 0.04829808697104454, -0.040650706738233566, -0.005283612757921219, -0.0571434460580349, -0.038971103727817535, 0.018540389835834503, 0.018107369542121887, 0.03350992873311043, 0.013146399520337582, 0.015538609586656094, 0.008222836069762707, -0.02103896252810955, 0.038168806582689285, -0.027608223259449005, 0.0042088753543794155, 0.02329951710999012, -0.02128531225025654, -0.0001676758547546342, 0.024676261469721794, 0.012053695507347584, 0.02132493630051613, 0.0070019131526350975, -0.010757403448224068, -0.03847126662731171, -0.0015586399240419269, -0.01550358533859253, -0.03266935795545578, -0.02283770777285099, -0.006839114706963301, -0.051231034100055695, -0.018777264282107353, -0.005221102386713028, 0.010743190534412861, -0.0071330866776406765, -0.014005795121192932, -0.034320224076509476, -3.2485764677403495e-05, -0.019023308530449867, -0.019702123478055, -0.03250810131430626, 0.016037631779909134, -0.021856028586626053, 0.05038406699895859, -0.003004441037774086, -0.016607055440545082, 0.011917230673134327, -0.008754913695156574, 0.04357128590345383, 0.020918862894177437, 0.025733893737196922, -0.04195487126708031, -0.006851543672382832, -0.024190327152609825, -0.031003234907984734, 0.016851253807544708, 0.043438881635665894, -0.03366909548640251, 0.04568273574113846, -0.024544471874833107, -0.0023505419958382845, 0.010619768872857094, -0.0643387958407402, 0.025440292432904243, -0.00899381935596466, 0.011275759898126125, 0.04912290722131729, 0.04743758216500282, -0.05060494691133499, 0.05246450752019882, -0.0003577740862965584, 0.008307003416121006, -0.0071627418510615826, -0.05375520512461662, -0.005060888361185789, -0.045655298978090286, 0.018526608124375343, -0.017911653965711594, -0.014070519246160984, -0.0059895627200603485, -0.034230154007673264, 0.04403199627995491, -0.029186490923166275, -0.031910866498947144, -0.011127359233796597, -0.0023291618563234806, 0.04406546428799629, 0.016603240743279457, -0.06500992178916931, -0.015948541462421417, 0.07229657471179962, -0.013226136565208435, 0.0770847499370575, 0.016746167093515396, 0.008181613869965076, -0.006104862783104181, -0.03903607651591301, 0.02585543878376484, -0.032939765602350235, 0.0017463559051975608, 0.022217001765966415, -0.007296622730791569, 0.0031004217453300953, 0.08759196102619171, 0.019186288118362427, -0.05054391548037529, 0.0010108733549714088, -0.028127256780862808, 0.032830554991960526, 0.07918258011341095, -0.010381970554590225, -0.0005510274786502123, -0.02616826817393303, -0.025783607736229897, 0.028589442372322083, -0.026730863377451897, 0.04346398264169693, 0.024613382294774055, 0.049200184643268585, 0.024977369233965874, -0.041146665811538696, 0.025218306109309196, 0.03163916617631912, -0.012939670123159885, 0.016072578728199005, 0.007885796017944813, 0.009954486973583698, -0.02369615063071251, -0.010720302350819111, -0.003355262568220496, -0.06647177785634995, -0.0586860254406929, 
-0.01734789088368416, -0.03160988166928291, -0.03575107827782631, 0.011891267262399197, 0.007116843014955521, -0.030104458332061768, -0.005184924695640802, 0.061742153018713, -0.03659644350409508, 0.04123329743742943, -0.002401485573500395, 0.04137997329235077, 0.056729987263679504, 0.02883283980190754, -0.0007570167654193938, -0.018712015822529793, -0.01495854090899229, -0.0209701769053936, -0.016540177166461945, -0.0027748257853090763, -0.02218629978597164, -0.018491292372345924, 0.05212857946753502, -0.019354993477463722, -0.026805123314261436, 0.022871455177664757, -0.02152705006301403, -0.0103132463991642, 0.06564351916313171, 0.04217242822051048, -0.050693560391664505, 0.03911679983139038, -0.007338337600231171, 0.031476061791181564, -0.019859569147229195, 0.021353483200073242, 0.005730353761464357, 0.016190461814403534, -0.022968420758843422, -0.04246357083320618, -0.00555432727560401, 0.03709086775779724, 0.022240212187170982, 0.003157025668770075, -0.01691228523850441, 0.018107738345861435, -0.012137862853705883, 0.022582774981856346, -0.012193236500024796, 0.050697483122348785, 0.016785049811005592, 0.013812357559800148, 0.024350296705961227, -0.008098061196506023, -0.010806931182742119, 0.018290776759386063, 0.0016817383002489805, -0.03759520500898361, 0.0252256877720356, 0.043343886733055115, 0.04402583837509155, -0.042509838938713074, 0.016580229625105858, -0.025652432814240456, -0.010642456822097301, -0.011107978411018848, -0.023055817931890488, -0.04954522103071213, 0.024811802431941032, 0.03137534484267235, 0.0028754824306815863, -0.012920105829834938, -0.014156164601445198, 0.0320555754005909, 0.02064802497625351, 0.011785442009568214, -0.006067947018891573, 0.024687211960554123, -0.01982123963534832, 0.024356326088309288, -0.010516282171010971, 0.06345798820257187, -0.007579473312944174, 0.07675205171108246, 0.026710189878940582, -0.03383127972483635, 0.0030879934784024954, 0.004062197171151638, -0.01823091320693493, 0.05037693306803703, -0.06543421000242233, -0.008802903816103935, 0.019868245348334312, -0.02548852749168873, -0.02440960705280304, 0.0059450180269777775, 0.03109995275735855, -0.020905574783682823, -0.033592741936445236, -0.05035293847322464, 0.004756949841976166, -0.06419543921947479, 0.016474466770887375, -0.057072319090366364, -0.0368560254573822, -0.009027105756103992, 0.01822919026017189, 0.04416685923933983, 0.020378606393933296, -0.0028970164712518454, 0.04731994867324829, -0.02932947687804699, -0.019969269633293152, -0.04463839530944824, 0.022401396185159683, 0.012450415641069412, 0.024162517860531807, -0.006138271652162075, 0.003700423985719681, 0.01853165216743946, 0.027010438963770866, 0.022407302632927895, 0.03281455859541893, 0.03311579301953316, -0.04965006187558174, 0.02726687677204609, 0.015746736899018288, 0.01506141945719719, -0.018169879913330078, 0.0029987806919962168, 0.039811305701732635, -0.009593391790986061, -0.017576275393366814, 0.034143153578042984, 0.00013775686966255307, 0.0027259739581495523, -0.002810633974149823, 0.00899240467697382, -0.015876678749918938, -0.021613122895359993, 0.038425739854574203, 0.010043269023299217, -0.04625578969717026, -0.0208616741001606, 0.032990772277116776, -0.004167037550359964, -0.009210360236465931, 0.024112435057759285, -0.028418155387043953, -0.025161700323224068, -0.01933985762298107, 0.024608859792351723, -0.041107289493083954, 0.04436866566538811, 0.09518132358789444, -0.025048524141311646, 0.012903370894491673, -0.02681269310414791, -0.04525390639901161, -0.0297707412391901, 
0.008416520431637764, -0.004261541645973921, 0.0012462720042094588, -0.0011301108170300722, -0.005899119656533003, -0.035171255469322205, -0.00798103865236044, 0.0007798736914992332, 0.0038008345291018486, 0.0020936091896146536, 0.015197339467704296, 0.0037070687394589186, 0.0374675914645195, 0.026287874206900597, -0.04339519888162613, -0.017155252397060394, -0.03821704164147377, 0.01905437745153904, 0.01601376011967659, 0.016474220901727676, -0.006715201307088137, 0.00834687240421772, -0.038653936237096786, 0.00974572915583849, -0.015096775256097317, -0.010153278708457947, 0.0022588681895285845, 0.011083305813372135, 0.040272872895002365, 0.044456277042627335, -0.059757690876722336, -0.010272393003106117, 0.04805973544716835, -0.009026366285979748, 0.014799604192376137, 0.030460817739367485, -0.05090765282511711, -0.009943043813109398, -0.03339487314224243, -0.007480093277990818, 0.063599593937397, -0.012224491685628891, 0.014037788845598698, -0.0026680780574679375, -0.0627821832895279, -0.03583179786801338, 0.039652690291404724, -0.004750797059386969, 0.008922879584133625, 0.030957581475377083, -0.003927331883460283, -0.03417859598994255, -0.05452588200569153, 0.026611993089318275, 0.019128086045384407, 0.02438991889357567, -0.022002216428518295, 0.029686084017157555, 0.032306600362062454, 0.0038431643042713404, -0.020832234993577003, -0.01810835301876068, -0.04015929624438286, -0.023907924070954323, 0.007074267603456974, -0.006286256946623325, -0.001311612781137228, -0.025979751721024513, -0.018676115199923515, -0.04513970762491226, -0.00990840420126915, 0.04054838791489601, -0.041832007467746735, 0.018093587830662727, -0.02777988277375698, 0.024718713015317917, -0.01788095198571682, -0.015272123739123344, 0.0037259573582559824, 0.0005090667400509119, 0.03575390577316284, 0.030905775725841522, -0.03135663643479347, -0.013661248609423637, -0.007716415449976921, 0.05739742890000343, -0.0214819498360157, 0.014804157428443432, 0.012838875874876976, -0.007231805007904768, 0.02533397451043129, -0.013815801590681076, 0.059244681149721146, -0.05335934832692146, 0.04527432844042778, 0.013262313790619373, 0.008767465129494667, 0.002682536607608199, -0.008499211631715298, 0.0050085680559277534, 0.008761066012084484, 0.03910498693585396, 0.016096942126750946, -0.024926425889134407, -0.011211526580154896, -0.024984998628497124, 0.012834461405873299, 0.001519740093499422, -0.05313883349299431, 0.0075037190690636635, 0.007744440343230963, 0.022584497928619385, 0.015371181070804596, 0.07935189455747604, -0.045791272073984146, -0.05667289346456528, -0.03201275318861008, 0.05919841676950455, -0.01882796175777912, -0.0007235465454868972, 0.012561331503093243, -0.015163099393248558, 0.007714384701102972, 0.07204752415418625, 0.04294027388095856, -0.04904956743121147, 0.07396319508552551, -0.0024767934810370207, -0.021285559982061386, 0.023161858320236206, 0.0006822010618634522, 0.005140380002558231, 0.024754153564572334, -0.06520385295152664, 0.044503532350063324, -0.025524459779262543, -0.0018881121650338173, -0.018904253840446472, -0.00845829676836729, -0.04740780591964722, -0.03440304100513458, 0.046540290117263794, -0.03551770746707916, 0.02636859565973282, -0.03574639931321144, 0.041916172951459885, 0.07780439406633377, 0.024107391014695168, 0.02382880076766014, -0.007027754094451666, -0.02832857333123684, 0.037478234618902206, 0.006173771806061268, 0.044500574469566345, -0.005513043608516455, -0.0161442868411541, 0.04547625780105591, 0.038514088839292526, -0.005840054713189602, 0.03379787132143974, 
0.014693409204483032, -0.0426124632358551, -0.014106819406151772, 0.0012925396440550685, -0.002115758368745446, 0.006941617466509342, 0.014027206227183342, 0.04609805345535278, -0.09030254930257797, -2.4364324417547323e-05, 0.026422740891575813, -0.002948821522295475, -0.020245030522346497, 0.018520822748541832, -0.027251863852143288, 0.03330479934811592, -0.03126733377575874, -0.02033313550055027, -0.019581841304898262, -0.030301282182335854, -0.026439106091856956, 0.015917040407657623, 0.058405470103025436, -0.0012152629205957055, -0.026212936267256737, -0.034299060702323914, -0.042660702019929886, -0.02034556306898594, 0.01691056229174137, 0.011257302016019821, -0.02327732741832733, 0.006777711678296328, 0.08427595347166061, 0.005824795924127102, 0.008204501122236252, 0.031184982508420944, -0.023900724947452545, 0.041486505419015884, -0.036153823137283325, -0.04145872965455055, -0.013261205516755581, -0.00974572915583849, 0.003364245407283306, 0.030476262792944908, -0.007206918206065893, 0.007862785831093788, -0.007846173830330372, 0.027641449123620987, -0.031702909618616104, 0.034495946019887924, -0.046657495200634, -0.031622182577848434, -0.06065978482365608, -0.030079849064350128, -0.0046265143901109695, 0.017041120678186417, 0.04802922159433365, -0.017348874360322952, 0.034618012607097626, 0.02908165007829666, 0.011830724775791168, 0.009412012062966824, 5.451210017781705e-05, 0.01069827564060688, 0.005100511014461517, 0.02284204587340355, 0.0075652459636330605, 0.03278256580233574, 0.01294622290879488, 0.008693387731909752, 0.010135804302990437, -0.03775682672858238, 0.01708468236029148, 0.04828627407550812, 0.051953598856925964, -0.035784054547548294, -0.04876340925693512, 0.010332544334232807, -0.03183531016111374, 0.03016069531440735, -3.740785177797079e-05, 0.0020785967353731394, 0.07719700783491135, -0.03632597625255585, 0.0593581385910511, -0.020343225449323654, -0.037507399916648865, -0.0646447092294693, -0.03817717358469963, -0.023844797164201736, -0.058841560035943985, 0.01589932106435299, 0.03730350360274315, -0.026155715808272362, -0.033087242394685745, 0.0511837862432003, -0.03960704058408737, 0.03690309077501297, -0.045143403112888336, 0.034425314515829086, -0.0478229857981205, -0.03318863734602928, 0.06627489626407623, -0.012192252092063427, 0.007638092152774334, 0.004771838895976543, 0.050013311207294464, 0.0012482409365475178, -0.008978376165032387, -0.009761849418282509, -0.005529963411390781, 0.02274889498949051, 0.0020022429525852203, 0.002159319119527936, -0.076609306037426, 0.01324139442294836, 0.026347046718001366, 0.07759176194667816, 0.008003495633602142, -0.03218847140669823, 0.004949157126247883, -0.009989003650844097, -0.008952288888394833, -0.007220699451863766, -0.03476715087890625, 0.009611906483769417, 0.03797927126288414, 0.01028961967676878, 0.04882413521409035, 0.05797533690929413, 0.04480685293674469, -0.024613289162516594, -0.010109225288033485, -0.06475889682769775, -0.012002998031675816, 0.011836632154881954, -0.0016016466543078423, 0.017576029524207115, -0.019802534952759743, 0.02601248398423195, 0.012281588278710842, -0.037675488740205765, -0.01104515977203846, 0.04555587098002434, 0.014643697999417782, -0.01428886130452156, 0.00661725178360939, 0.029204949736595154, 0.02863786369562149, 0.04377278313040733, 0.022974325343966484, -0.04664316028356552, 0.020882440730929375, 0.008736701682209969, 0.017651520669460297, 0.002709115855395794, 0.023119036108255386, -0.027614159509539604, -0.023855872452259064, 0.03988292068243027, 
-0.009105858393013477, 0.021539045497775078, 0.02170664444565773, -0.02905605360865593, -0.002300828928127885, -0.005732260644435883, -0.0039044443983584642, -0.0034828216303139925, 0.021050775423645973, -0.010879532434046268, 0.04445726051926613, -0.01233609952032566, 0.034147460013628006, -0.007722752634435892, 0.0055277482606470585, -0.04481558874249458, 0.036159612238407135, 0.031015047803521156, -0.05378203094005585, 0.021327396854758263, 0.08981821686029434, 0.023497790098190308, -0.0036098577547818422, 0.040165696293115616, -0.036640990525484085, -0.043712180107831955, 0.030396755784749985, -0.0058024004101753235, 0.06961896270513535, -0.05982150137424469, -0.04687855765223503, 0.015465193428099155, 0.04991868510842323, -0.006651460193097591, -0.03076697140932083, -0.020286375656723976, -0.012492746114730835, -0.0476088747382164, 0.016315974295139313, 0.03285098448395729, -0.019082926213741302, 0.008785430341959, 0.0030083786696195602, -0.025950221344828606, -0.05201266333460808, 0.043154630810022354, -0.016449179500341415, -0.0051868935115635395, 0.02086404524743557, 0.01271817646920681, -0.010849999263882637, 0.023212555795907974, -0.0228724405169487, -0.05498658865690231, -0.03854263573884964, 0.02347773313522339, -0.029816025868058205, -0.026211952790617943, -0.07801444083452225, -0.011398880742490292, -0.030489427968859673, -0.018014710396528244, 0.08153927326202393, 0.027401987463235855, 0.001569837681017816, 0.019062252715229988, 0.055958203971385956, -0.014613672159612179, -0.07708380371332169, -0.03450972959399223, -0.05217558518052101, -0.06267214566469193, -0.0021204345393925905, -0.016728200018405914, 0.050935711711645126, -0.01433385256677866, 0.044450368732213974, 0.05614524707198143, -0.05241135135293007, -0.06332951784133911, -0.0035552221816033125, -0.002987705869600177, -0.01691228523850441, 0.018723612651228905, 0.04657929763197899, -0.018956921994686127, 0.02115364745259285, 0.03488478809595108, 0.012653697282075882, 0.042262010276317596, 0.0231244508177042, -0.013843551278114319, 0.011063125915825367, -0.03082665428519249, 0.046572525054216385, -0.0025523321237415075, -0.0034181734081357718, -0.018859462812542915, -0.001081136055290699, 0.051947690546512604, -0.02541666477918625, 0.039588458836078644, 0.03871438279747963, -0.03272325545549393, -0.00596606032922864, -0.010763274505734444, 0.015606456436216831, 0.03721490502357483, -0.02751125954091549, -0.04720987752079964, 0.04312879219651222, 0.006219793576747179, 0.008839573711156845, -0.01549423299729824, 0.04895506054162979, -0.04293633624911308, 0.018714753910899162, 0.05584302917122841, 0.03767511993646622, 0.05940788984298706, -0.009825958870351315, 0.028547050431370735, 0.0011217433493584394, -0.027311177924275398, 0.027906442061066628, 0.03808938339352608, 0.017174633219838142, -0.06395536661148071, -0.06880977004766464, 0.013526136986911297, 0.00570422038435936, 0.0001932533923536539, 0.056627608835697174, -0.009782153181731701, 0.007331199944019318, -0.08482131361961365, -0.05301479622721672, 0.01038363203406334, 0.08295812457799911, 0.009122347459197044, 0.01732303388416767, -0.005332587752491236, -0.010487795807421207, -0.0029485137201845646, 0.017338907346129417, -0.005990639794617891, -0.015889476984739304, -0.028933988884091377, 0.017602363601326942, -0.04677679389715195, -0.013900215737521648, 0.006957614328712225, -0.03524213284254074, -0.004299934022128582, -0.012541843578219414, 0.012033145874738693, 0.07265392690896988, 0.026224380359053612, 0.0012622688664123416, -0.006348506081849337, 
-0.019161740317940712, 0.05562472343444824, -0.02158285304903984, 0.038666918873786926, 0.0052995942533016205, -0.03238412365317345, 0.09679453819990158, 0.001047050696797669, -0.012510957196354866, 0.005767207592725754, 0.03687601909041405, 0.0076938350684940815, 0.01065077818930149, -0.013181837275624275, -0.01670389622449875, -0.01965007372200489, -0.007829193025827408, -0.0183731596916914, -0.01526437234133482, 0.0011586588807404041, 0.017039215192198753, 0.03401493281126022, 0.017257817089557648, 0.042404383420944214, -0.029183045029640198, -0.03350955620408058, -0.02477777935564518, 0.02105046808719635, 0.026390254497528076, -0.028355151414871216, -0.022457508370280266, -0.02450367994606495, 0.016347723081707954, 0.00745646795257926, 0.02702225185930729, -0.01823091320693493, -0.02502978965640068, -0.034755583852529526, 0.014049417339265347, 0.006497153080999851, 0.02050713077187538, 0.007977962493896484, 0.02007201872766018, -0.04727884754538536, -0.027257895097136497, -0.02675498090684414, -0.008475277572870255, -0.011537368409335613, -0.009811685420572758, 0.004989518318325281, 0.0005414294428192079, 0.0340261310338974, -0.035423386842012405, -0.040856264531612396, -0.017227791249752045, -0.0040941135957837105, -0.023101316764950752, -0.01180069986730814, -0.016339724883437157, 0.057461414486169815, 0.02464020624756813, 0.009578255005180836, 0.05212291702628136, 0.02022927813231945, 0.006582658737897873, 0.011290526017546654, -0.0017323280917480588, -0.0525750108063221, 0.01254934910684824, 0.036653295159339905, 0.007454006467014551, -0.03352531045675278, 0.0014903767732903361, -0.0376778282225132, 0.03990728408098221, 0.01531851477921009, -0.024814201518893242, -0.004683856852352619, -0.06117168068885803, -0.010540153831243515, 0.06691280007362366, -0.04059022665023804, 0.016495877876877785, -0.003997164312750101, 0.00820081029087305, 0.012596108950674534, -0.0029131362680345774, 0.0054832035675644875, 0.0025636376813054085, 0.0010114886099472642, -0.011272559873759747, 0.029272135347127914, 0.0026666629128158092, 0.008962132968008518, 0.026943005621433258, 0.023710178211331367, 0.021437158808112144, 0.01386551558971405, -0.006134395487606525, -0.022667743265628815, 0.00028055888833478093, -0.01906496100127697, 0.010036132298409939, 0.019596301019191742, -0.019857170060276985, -0.03434828296303749, -0.0413062646985054, -0.00957939401268959, 0.05669061467051506, -0.01676144078373909, 0.039070162922143936, -0.02635014057159424, 0.033690690994262695, -0.024763504043221474, 0.014982335269451141, -0.011449509300291538, -0.011476950719952583, -0.02328478731215, 0.04392863065004349, -0.022091303020715714, -0.024317562580108643, 0.0460815504193306, 0.010189086198806763, -0.020688509568572044, 0.09157736599445343, 0.04244314506649971, 0.05135113745927811, 0.013262067921459675, -0.015856806188821793, 0.026810599491000175, -0.039803922176361084, -0.061889324337244034, 0.033589113503694534, -0.01803501509130001, 0.019316233694553375, 0.0012495944974943995, 0.0056156073696911335, 0.006527423858642578, 0.0072044567205011845, 0.020439451560378075, 0.05502596125006676, -0.06618617475032806, 0.02190524898469448, -0.023839138448238373, 0.0061934757977724075, -0.02996664308011532, -0.025421587750315666, -0.021813390776515007, 0.03513187915086746, -0.0007801505853421986, -0.012201911769807339, 0.05433588847517967, -0.0027726106345653534, -0.06555479764938354, 0.003743738168850541, 0.031761232763528824, 0.0010719686979427934, -0.0021861442364752293, -0.0225286316126585, 0.020037563517689705, 
0.0304871816188097, -0.05985477566719055, 0.01137210987508297, -0.009344641119241714, 0.012415222823619843, -0.015948787331581116, 0.0028056504670530558, 0.004722740966826677, -0.02326682209968567, 0.02790231816470623, -0.03085877001285553, -0.00419170968234539, -0.011418008245527744, -0.006496660877019167, -0.0011385091347619891, 0.032572392374277115, 0.05500824749469757, -0.04246652498841286, -0.0036745830439031124, -0.02786565013229847, 0.0006057856953702867, -0.045819204300642014, 0.004780821967869997, 0.03504583612084389, 0.019412705674767494, -0.05530356988310814, -0.056122489273548126, -0.005363627802580595, 0.01906209997832775, 0.036343201994895935, -0.02910183183848858, -0.056856609880924225, -0.05945485830307007, 0.0021308939903974533, -0.006086528301239014, 0.03084203600883484, -0.03165048733353615, -0.005219379905611277, -0.030360901728272438, 0.009207499213516712, 0.018423613160848618, 0.046165719628334045, 0.007845650427043438, 0.01093469001352787, 0.031805530190467834, 0.016592226922512054, 0.018855033442378044, -0.0036353296600282192, 0.028560463339090347, -0.058793097734451294, -0.035464610904455185, 0.0014277278678491712, -0.014474868774414062, 0.027571553364396095, -0.0032052621245384216, 0.053420379757881165, -0.04518425464630127, 0.0008165740291588008, -0.06158119812607765, -0.0403512567281723, 0.009539863094687462, 0.008423703722655773, 0.006390836089849472, -0.026983365416526794, 0.06043078377842903, 0.03896765783429146, -0.02539057843387127, -0.006818810943514109, 0.009125792421400547], index=0, object='embedding')], model='voyage-large-2-instruct', object='list', usage=Usage(prompt_tokens=None, total_tokens=6), meta={'usage': {'credits_used': 2}}) ``` {% endcode %}
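Clustering and classification workloads usually embed many texts at once. The sketch below assumes the endpoint accepts a list of strings as `input`, as the OpenAI embeddings API does; if batching is not supported for this model, send one request per string instead.

```python
import openai

client = openai.OpenAI(
    # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
    api_key="<YOUR_AIMLAPI_KEY>",
    base_url="https://api.aimlapi.com/v1",
)

# A small set of texts to cluster or classify
texts = [
    "Laura is a DJ.",
    "The concert starts at nine.",
    "Quarterly revenue grew by 8%.",
]

# Assumption: a list input returns one embedding per string, in order.
response = client.embeddings.create(input=texts, model="voyage-large-2-instruct")

for text, item in zip(texts, response.data):
    print(f"{text!r} -> vector of {len(item.embedding)} dimensions (index {item.index})")
```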
You can find a more advanced example of using embedding vectors in our article [Find Relevant Answers: Semantic Search with Text Embeddings](https://docs.aimlapi.com/use-cases/find-relevant-answers-semantic-search-with-text-embeddings) in the Use Cases section. --- # Source: https://docs.aimlapi.com/api-references/embedding-models/anthropic/voyage-large-2.md # voyage-large-2 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `voyage-large-2` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview Voyage’s most powerful generalist embedding model, outperforming popular competing models. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema {% openapi src="" path="/v1/embeddings" method="post" %} [voyage-large-2.json](https://3927338786-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FROMd1X5PuqtikJ48n2N9%2Fuploads%2Fgit-blob-3677b6dfb5c681eea77595a53a83546af9dec5bd%2Fvoyage-large-2.json?alt=media\&token=9019b597-7720-47b1-a5e6-d8de941e6af5) {% endopenapi %} ## Code Example {% tabs %} {% tab title="Python" %}
```python
import openai

# Initialize the API client
client = openai.OpenAI(
    # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
    api_key="<YOUR_AIMLAPI_KEY>",
    base_url="https://api.aimlapi.com/v1",
)

# Define the text for which to generate an embedding
text = "Laura is a DJ."

# Request the embedding
response = client.embeddings.create(
    input=text,
    model="voyage-large-2"
)

# Print the embedding
print(response)
```
{% endtab %}

{% tab title="JS" %}
```javascript
import OpenAI from "openai";
import util from "util";

// Initialize the API client
const client = new OpenAI({
  // Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
  apiKey: "<YOUR_AIMLAPI_KEY>",
  baseURL: "https://api.aimlapi.com/v1",
});

// Define the text for which to generate an embedding
const text = "Laura is a DJ.";

// Request the embedding
const response = await client.embeddings.create({
  input: text,
  model: "voyage-large-2",
});

// Convert the embedding to a regular array (not a TypedArray)
const pythonLikeResponse = {
  ...response,
  data: response.data.map(item => ({
    ...item,
    embedding: Array.from(item.embedding),
  })),
};

// Python-like print
console.log(
  util.inspect(pythonLikeResponse, {
    depth: null,
    maxArrayLength: null,
    compact: true,
  })
);
```
{% endtab %}
{% endtabs %}

This example shows how to set up an API client, send text to the embeddings endpoint, and print the response containing the embedding vector. Note how large a vector the model produces from a single short input phrase.
Response {% code overflow="wrap" %} ```json CreateEmbeddingResponse(data=[Embedding(embedding=[-0.01561100035905838, 0.0163817647844553, 0.01526249386370182, 0.028116634115576744, 0.026989677920937538, 0.0004941212828271091, -0.025271985679864883, -0.011951270513236523, -0.05632830783724785, 0.02140776626765728, -0.012444284744560719, 0.05962485820055008, -0.0359853059053421, -0.04648525267839432, -0.000635049189440906, -0.03689562901854515, 0.03731445223093033, -0.02737775258719921, -0.03597225993871689, 0.018330063670873642, 0.014466659165918827, 0.06400783360004425, -0.017127983272075653, -0.010960116051137447, 0.019682390615344048, -0.01969660073518753, -0.020545894280076027, -0.036266110837459564, 0.022564861923456192, 0.015046908520162106, 0.005086916033178568, -0.055712416768074036, -0.009372987784445286, -0.05443871021270752, 0.01082932110875845, 0.013921350240707397, 0.012792528606951237, 0.009856860153377056, 0.038214996457099915, -0.01748892292380333, 0.027313927188515663, 0.0010629575699567795, 0.009722599759697914, -0.03439014032483101, 0.04742538928985596, -0.03777078166604042, 0.028628166764974594, 0.009835371747612953, 0.027161121368408203, -0.004167856182903051, -0.0067236595787107944, 0.0004543469985947013, -0.017227914184331894, 0.02788742259144783, 0.007638351526111364, -0.010299152694642544, -0.02586738020181656, 0.0218382366001606, 0.04635853320360184, 0.026485133916139603, 0.025494210422039032, -0.04289240390062332, 0.03552595153450966, -0.037090834230184555, 0.022934680804610252, -0.027275260537862778, -0.031381506472826004, 0.002224462805315852, -0.03688165545463562, 0.018545765429735184, 0.016824407503008842, 0.018384769558906555, 0.018052050843834877, -0.016227442771196365, 0.03755158558487892, -0.04015543311834335, -0.02670223079621792, -0.016971217468380928, -0.030671043321490288, 0.0021956220734864473, 0.05178181082010269, 0.03051823377609253, -0.0486641600728035, 0.031219378113746643, -0.016983095556497574, 0.016476454213261604, -0.006486237049102783, -0.04223339259624481, -0.002773251850157976, -0.040842074900865555, -0.005939151160418987, 0.011872188188135624, 0.023673562332987785, -0.026831047609448433, 0.017583612352609634, -0.03638875484466553, -0.03278379514813423, -0.0002875628415495157, -0.027190471068024635, -0.03907779976725578, 0.010886580683290958, 0.014972136355936527, -0.01599508710205555, -0.013510212302207947, -0.0038126243744045496, 0.003719332395121455, 0.0034845301415771246, 0.05393882468342781, -0.00012415634409990162, -0.02310345135629177, 0.028389407321810722, 0.013234879821538925, 0.009557213634252548, 0.02266225963830948, 0.0019554041791707277, -0.04726326838135719, 0.026736386120319366, 0.036146845668554306, 0.02594052068889141, -0.03500591591000557, -0.028510767966508865, 0.007205901201814413, -0.008451773785054684, 0.03590983524918556, 0.020070932805538177, 0.05176317319273949, 0.0013383494224399328, 0.021569279953837395, -0.010956738144159317, -0.016569597646594048, -0.006681963801383972, -0.02134251594543457, -0.0062222592532634735, 0.011128997430205345, 0.055099327117204666, 0.009063849225640297, 0.001973456935957074, 0.003928495105355978, 0.03676518425345421, 0.031590916216373444, 0.022166915237903595, 0.00042000305256806314, -0.010268054902553558, -0.06810010224580765, 0.008095460012555122, -0.02306521311402321, -0.03842370584607124, -0.04987963289022446, 0.016797153279185295, -0.025161108002066612, -0.007720866706222296, 0.013069551438093185, -0.006471503525972366, -0.02884107641875744, -0.07532679289579391, 
0.02133904956281185, 0.0034364573657512665, -0.022415226325392723, 0.0029414191376417875, 0.0408816784620285, -0.018296638503670692, 0.009214590303599834, 0.03463122993707657, 0.03310105577111244, -0.008069638162851334, -0.029624564573168755, 0.018412407487630844, -0.015460200607776642, -0.013632738031446934, 0.019308757036924362, -0.017245501279830933, 0.021540310233831406, 0.010338636115193367, 0.025030896067619324, -0.027800770476460457, -0.03200531378388405, 0.00026735541177913547, -0.02633092552423477, -0.0034100771881639957, -0.008583209477365017, 0.03241575136780739, 0.004901030566543341, -0.028110811486840248, 0.004195867106318474, -0.004088773392140865, -0.03378729522228241, -0.016172703355550766, -0.029049715027213097, 0.02661464549601078, -0.03551383689045906, 0.03374816104769707, 0.017315266653895378, 0.06481892615556717, 0.007677718531340361, -0.0103862714022398, -0.0010579639347270131, 0.02331646718084812, 0.0014224984915927052, -0.03910761699080467, 0.028060730546712875, -0.014717360027134418, -0.023774076253175735, 0.008702532388269901, 0.0062010763213038445, 0.015451756305992603, -0.0293868500739336, -0.007175386417657137, -0.027763966470956802, -0.029969895258545876, -0.009932857006788254, -0.03921965882182121, 0.01260571088641882, -0.006088872440159321, 0.01902841590344906, 0.04584793001413345, -0.016598979011178017, 0.03478660061955452, 0.025516806170344353, -0.0591328926384449, -0.037109408527612686, -0.003896715585142374, 0.03235984966158867, 0.017011746764183044, -0.009764849208295345, 0.026055971160531044, -0.028698517009615898, 0.01783863641321659, 0.0013551792362704873, -0.007793779950588942, -0.011981668882071972, -0.022837312892079353, 0.027087045833468437, -0.0017900174716487527, -0.010994940996170044, 0.02026287466287613, 0.02316156215965748, -0.019865714013576508, 0.03249611705541611, -0.023300394415855408, -0.01026636641472578, 0.01567048579454422, 0.004460398107767105, -0.010391512885689735, -0.01577414572238922, -0.009346083737909794, 0.018950730562210083, -0.027090540155768394, -0.028998075053095818, -0.05402175337076187, 0.013167792931199074, 0.011030813679099083, 0.007862847298383713, 0.005070377141237259, -0.0007595567731186748, 0.000793489336501807, -0.054297082126140594, 0.03416791558265686, -0.019880620762705803, 0.026419909670948982, 0.03829512745141983, -0.017762044444680214, 0.056693557649850845, 0.03424303978681564, -0.029434604570269585, -0.00021964665211271495, 0.017203951254487038, 0.01655646786093712, -0.005071541760116816, 0.0467720702290535, -0.015545864589512348, 0.0006547652301378548, -0.050357624888420105, 0.01593691110610962, -0.0007488397532142699, 0.027610691264271736, 0.03909363970160484, -0.011458663269877434, -0.00885865930467844, 0.015301277860999107, 0.006836168933659792, 0.00016492062422912568, -0.02690628543496132, 0.019307591021060944, -0.0017978211399167776, -0.013217873871326447, -0.012339754030108452, -0.04309319704771042, 0.02255988121032715, 0.01490074023604393, -0.01168653555214405, 0.02538635954260826, 0.03433889523148537, 0.01934812217950821, -0.005348564125597477, -0.003482433967292309, 0.021459246054291725, -0.044842567294836044, -0.028761964291334152, 0.01890944130718708, -0.02796804904937744, 0.02667241543531418, -0.03280522674322128, 0.011964897625148296, 0.025785384699702263, 0.04416098818182945, 0.025074340403079987, -0.005189845804125071, 0.01818249747157097, -0.011724213138222694, -0.001976419473066926, -0.01982984133064747, -0.007346960250288248, -0.00037185754626989365, -0.0014101527631282806, 
0.027824996039271355, -0.024330683052539825, 0.030691364780068398, -0.03200438246130943, 0.03636080399155617, -0.01832389272749424, 0.018969597294926643, -0.01068789791315794, 0.035673633217811584, 0.006694891955703497, -0.007840833626687527, 0.013054301030933857, -0.00038938617217354476, -0.024205360561609268, -0.016925793141126633, -0.025757430121302605, -0.00016538648924324661, 0.030299270525574684, 0.01302721444517374, 0.03834078088402748, -0.009825821034610271, 0.008132764138281345, -0.0070003909058868885, -0.03716117888689041, -0.01021552737802267, 0.03126503527164459, 0.015670953318476677, -0.005347385071218014, -0.039390869438648224, 0.01486055925488472, -0.03582131490111351, -0.00987287424504757, -0.028576457872986794, -0.002587163122370839, 0.04089140146970749, -0.024546077474951744, 0.0215337872505188, 0.03409477695822716, -0.0509054958820343, 0.015685511752963066, -0.03386649489402771, -0.0007727887714281678, 0.03762705624103546, -0.007396445143967867, -0.04039902612566948, -0.00019748836348298937, -0.01580815389752388, 0.021774180233478546, 0.011872421018779278, 0.0386585108935833, 0.04261939972639084, 0.015379931777715683, -0.04090264067053795, -0.008455093018710613, -0.024551915004849434, 0.004770934581756592, -0.016702231019735336, 0.055900633335113525, -0.012855888344347477, -0.021779071539640427, 0.005040240939706564, 0.012406257912516594, 0.03936943784356117, 0.007326621096581221, 0.006895129568874836, -0.03918774425983429, -0.008432061411440372, 0.020185595378279686, -0.01102100033313036, 0.0332203209400177, 0.0013226697919890285, 0.020309926941990852, 0.02376347780227661, -0.004147939849644899, -0.03156575933098793, -0.023394037038087845, 0.02575230784714222, 0.020198581740260124, 0.03844199329614639, -0.007633867673575878, 0.023998277261853218, 0.01781049557030201, 0.005160058382898569, 0.02790699154138565, 0.030801022425293922, 0.029096726328134537, -0.031985048204660416, -0.03073812648653984, -0.006274670362472534, 0.027680573984980583, -0.0036972109228372574, -0.02271956205368042, 0.01919100433588028, 0.016859522089362144, -0.0200082715600729, -0.026645395904779434, -0.023417096585035324, -0.0002672389382496476, 0.025659596547484398, -0.048059917986392975, 0.010213663801550865, -0.007070854771882296, 0.00754858274012804, -0.03601279482245445, -0.012392805889248848, 0.017474597319960594, -0.0024901730939745903, -0.02173277549445629, -0.030866242945194244, -0.02285010926425457, 0.007194836623966694, 0.02264164201915264, 0.04671353101730347, 0.032564714550971985, 0.041126202791929245, 0.044610559940338135, -0.006376930512487888, -0.015022742561995983, -0.064248226583004, -0.024678168818354607, -0.022263817489147186, 0.021715711802244186, 0.009394215419888496, -0.05724422261118889, -0.015287650749087334, 0.038339849561452866, 0.021717803552746773, -0.016180330887436867, -0.017088385298848152, 0.012107222341001034, -0.0030675700400024652, 0.007209511939436197, 0.0006611965945921838, -0.03068828023970127, -0.034486111253499985, 0.02214944362640381, -0.017202990129590034, 0.05423186346888542, 0.015475633554160595, -0.043653182685375214, 0.02030780166387558, -0.02637891285121441, 0.01138138584792614, 0.002471268642693758, -0.004868681076914072, -0.002345241606235504, 0.029836654663085938, 0.014773090369999409, 0.03010406717658043, 0.034129250794649124, 0.013569321483373642, -0.033810123801231384, 0.014772041700780392, 0.028001097962260246, 0.012205523438751698, 0.007632703520357609, 0.0035236638505011797, -0.019178077578544617, 0.03518492728471756, 0.03753061965107918, 
0.042594242841005325, 0.032781932502985, 0.02036886103451252, 0.008273866958916187, 0.021707559004426003, 0.008643656969070435, 0.014840059913694859, -0.015387418679893017, 0.026205169036984444, 0.01748245768249035, 0.004658002872020006, 0.00884037371724844, -0.010825593955814838, -0.018101435154676437, 0.012904658913612366, -0.020081879571080208, 0.038989633321762085, -0.03265940770506859, 0.003149098716676235, -0.05136508494615555, -0.05189688131213188, -0.010399200022220612, 0.023441554978489876, -0.017339957877993584, -0.0056126001290977, -0.02619069814682007, 0.013750197365880013, 0.03338198363780975, 0.026715507730841637, 0.014513479545712471, 0.03035750426352024, -0.04307875409722328, 0.007957768626511097, 0.042994897812604904, -1.7907164874486625e-05, 9.760132525116205e-05, 0.030064964666962624, 0.013544280081987381, -0.020923487842082977, 0.006907098926603794, 0.018497196957468987, -0.01779651828110218, 0.0320206880569458, 0.005771551746875048, 0.03581707924604416, -0.024985238909721375, 0.0005362685769796371, 0.03140293434262276, 0.01062771212309599, 0.004155976232141256, 0.0057830424048006535, 0.013424433767795563, -0.028928659856319427, -0.031268760561943054, -0.022247424349188805, -0.025858061388134956, 0.007437849882990122, 0.04363291338086128, -0.008121845312416553, 0.03824015334248543, -0.006650400348007679, -0.02219579927623272, -0.014472715556621552, 0.015444652177393436, 0.05621365085244179, -0.023035310208797455, 0.019451221451163292, -0.0018785379361361265, 0.010490860790014267, -0.013677477836608887, -0.04726792499423027, -0.0061452435329556465, 0.034233249723911285, -0.023927465081214905, -0.026708286255598068, -0.02027871273458004, 0.01737629622220993, -0.03334936872124672, -0.040836021304130554, 0.00013164312986191362, -0.005370518658310175, 0.0038786628283560276, 0.03575050085783005, 0.03614649921655655, -0.01838250458240509, -0.03133939951658249, -0.029711799696087837, -0.03175513818860054, -0.03981027379631996, 0.01897425577044487, -0.0026607715990394354, 0.01913393661379814, 0.019708478823304176, -0.045819979161024094, 0.056935813277959824, -0.047368086874485016, -0.008553043939173222, -0.008238052949309349, -0.01054164208471775, -0.009270364418625832, -0.009780165739357471, 0.0124081801623106, 0.01522138062864542, 0.01144949160516262, -0.004644943866878748, -0.0033324214164167643, -0.0036685520317405462, 0.008332101628184319, -0.009887607768177986, 0.004527134820818901, -0.0008404428954236209, -0.018239103257656097, -0.028126884251832962, 0.039549268782138824, -0.008352513425052166, 0.023325318470597267, 0.024531709030270576, 0.020384175702929497, -0.01666443608701229, -0.053531184792518616, 0.03832937031984329, 0.03291098028421402, -0.02810918167233467, -0.030703186988830566, 0.007889866828918457, 0.006861967034637928, -0.0076120877638459206, 0.0024017728865146637, 0.0002292189747095108, -0.010531130246818066, 0.018540291115641594, -0.045134205371141434, 0.005439235363155603, 0.03172532096505165, 0.041210584342479706, 0.06814202666282654, -0.04373843967914581, -0.006974651012569666, 0.024038344621658325, 0.029075060039758682, -0.0167943574488163, 0.021829618141055107, 0.00394411850720644, 0.020799797028303146, 0.06064978986978531, -0.003670648206025362, 0.003946331329643726, -0.0728440135717392, -0.013736164197325706, -0.02894776128232479, 0.014498338103294373, 0.01772523857653141, -0.014983084052801132, 0.016463642939925194, 0.010431286878883839, 0.02997082658112049, 0.006815845612436533, 0.007061842828989029, -0.03320587798953056, -0.0188938919454813, 
0.01319132000207901, -0.016422180458903313, 0.011988657526671886, 0.0039121476002037525, -0.01755461096763611, 0.0009688649442978203, 0.04124506190419197, -0.02731555886566639, 0.04081086441874504, -0.022731557488441467, 0.007542264647781849, 0.04177523031830788, 0.00903429463505745, 0.008090428076684475, -0.051856815814971924, 0.028538255020976067, 0.028897447511553764, 0.03172322362661362, 0.027290865778923035, 0.030006233602762222, -0.002491672756150365, 0.0019487362587824464, -0.02276463620364666, 0.008652334101498127, 0.031185369938611984, -0.022183001041412354, 0.04328351095318794, -0.0011171303922310472, -0.017360223457217216, -0.012476342730224133, 0.016625331714749336, -0.036035384982824326, -0.014988209120929241, 0.03618562966585159, 0.011506612412631512, -0.01594710163772106, 0.00192276353482157, -0.08429400622844696, -0.0052228644490242004, -0.020355582237243652, 0.058243997395038605, 0.0016532825538888574, 0.04175543040037155, -0.016341961920261383, 0.02316010743379593, -0.028642145916819572, -0.012513410300016403, 0.014522098004817963, -0.004431193228811026, 0.04676058515906334, 0.053287994116544724, -0.031150661408901215, -0.00144620006904006, -0.00959492102265358, -0.03911739960312843, -0.04584001004695892, 0.036000680178403854, -0.02693936415016651, 0.0526469461619854, -0.009305508807301521, 0.023651665076613426, 0.025636302307248116, -0.020257866010069847, -0.02172805927693844, -0.015569856390357018, 0.010018519125878811, -0.024375639855861664, 0.005361608695238829, 0.027821268886327744, -0.007882500998675823, 0.022768478840589523, 0.038876309990882874, 0.007413100451231003, -0.01447749137878418, -0.027937505394220352, -0.017782075330615044, -0.032050970941782, -0.012589843012392521, 0.015063564293086529, -0.004670071881264448, -0.032362643629312515, 0.008738463744521141, 0.007616048213094473, -0.00874259788542986, -0.008126940578222275, 0.01282676961272955, -0.019974729046225548, 0.04441675543785095, 0.018706245347857475, 0.004110465757548809, -0.03736383467912674, 0.0018351930193603039, -0.003515685675665736, 0.0026709481608122587, 0.01913553848862648, 0.022398455068469048, 0.017450779676437378, 0.0069632078520953655, 0.037930577993392944, -0.010789867490530014, 0.007281198166310787, 0.008048629388213158, -0.006295373197644949, 0.02293700911104679, -0.011130306869745255, -0.03487744927406311, -0.012287983670830727, -0.02220558188855648, -0.04921925067901611, 0.029961511492729187, 0.002944651059806347, -0.014541315846145153, 0.026400459930300713, -0.0009273290634155273, 0.01889878325164318, 0.014154056087136269, 0.005410525947809219, -0.00025832903338596225, -0.026302043348550797, -0.03285600617527962, 0.045477092266082764, -0.006668744143098593, 0.007745023351162672, 0.014358838088810444, -0.010156244039535522, 0.036752138286828995, -0.02216598205268383, 0.02845672518014908, -0.012960316613316536, -0.018919631838798523, 0.016040479764342308, 0.0201636403799057, -0.024941328912973404, 0.0013182001421228051, -0.012210181914269924, 0.0075933365151286125, -0.0007104630931280553, 0.029780283570289612, 0.012326651252806187, -0.03758512809872627, 0.009976066648960114, 0.030073320493102074, -0.0337989442050457, 0.01598442904651165, 0.012692218646407127, -0.025265930220484734, -0.03963405638933182, -0.025179743766784668, -0.02215573377907276, -0.0525435209274292, -0.06272294372320175, -0.00205032667145133, 0.018734971061348915, 0.02867848426103592, -0.014052028767764568, -0.0374528206884861, 0.019475307315587997, 0.028859011828899384, 0.014578585512936115, 0.001146174967288971, 
0.006506036501377821, -0.0013879216276109219, 0.017282655462622643, -0.01845378428697586, -0.023963971063494682, -0.003730571595951915, 0.045663442462682724, -0.02265879325568676, -0.009229323826730251, -0.02306908741593361, 0.02576884627342224, -0.009896313771605492, 0.017938552424311638, -0.011646353639662266, -0.03968716785311699, -0.018215343356132507, -0.017861507833003998, -0.04468882828950882, -0.04019404202699661, 0.04563269391655922, -0.06937101483345032, 0.022027617320418358, -0.006465214304625988, 0.018402274698019028, 0.01556362584233284, -0.01345465611666441, -0.019538434222340584, 0.010059196501970291, -0.01101543940603733, -0.0103528443723917, 0.01600734516978264, -0.030400831252336502, 0.01622581295669079, 0.03166522458195686, -0.02088761515915394, -0.006096210330724716, -0.00903313048183918, 0.009699306450784206, 0.019045362249016762, 0.014117498882114887, -0.0016533990856260061, 0.027659375220537186, 0.033881403505802155, -0.03541530668735504, -0.012575428932905197, 0.0023869231808930635, 0.04531869292259216, 0.019648730754852295, -0.027894411236047745, 0.06315154582262039, -0.019919056445360184, -0.0055879089049994946, -0.023773841559886932, -0.0033724866807460785, 0.004125345032662153, -0.035242464393377304, 0.008068764582276344, -0.03914441913366318, -0.047504592686891556, -0.032891646027565, 0.0029986202716827393, -0.003852922935038805, -0.015390843152999878, 0.017134739086031914, 0.006639452651143074, 0.013510911725461483, -0.022561978548765182, -0.0027690590359270573, -0.0011980675626546144, 0.002105649560689926, -0.012595928274095058, 0.014941154979169369, -0.020060217007994652, 0.024062570184469223, 0.003997694235295057, -0.0127552580088377, 0.02723286673426628, 0.014107700437307358, -0.002023422159254551, -0.022545672953128815, 0.013797863386571407, -0.017208987846970558, -0.025878557935357094, -0.006478957831859589, 0.06371153891086578, 0.04815215989947319, 0.053829342126846313, 0.024223238229751587, -0.016742732375860214, 0.06270337104797363, 0.025599149987101555, -0.008966393768787384, -0.023493267595767975, -0.021201150491833687, 0.0006118244491517544, -0.002047706162557006, 0.022668199613690376, 0.01345378439873457, -0.005134493578225374, 0.02617858536541462, 0.03701675683259964, -0.02243642508983612, 0.013639959506690502, -0.037418343126773834, -0.0016963762464001775, -0.018731489777565002, -0.027977803722023964, -0.01699753850698471, -0.00806285347789526, -0.0157331470400095, -0.031534310430288315, -0.0010882314527407289, 0.02825406938791275, -0.005617797840386629, -0.0065036495216190815, 0.018883876502513885, -0.013767377473413944, -0.017329737544059753, -0.014273304492235184, -0.01020819041877985, -0.024503173306584358, 0.03376912325620651, -0.0016771587543189526, 0.01848950982093811, 0.002620939165353775, -0.01345686987042427, 0.04883839562535286, -0.039051711559295654, -0.03748450055718422, -0.010618103668093681, -0.023755555972456932, -0.043276287615299225, -0.022812386974692345, 0.04128419607877731, 0.00019113349844701588, 0.04408225417137146, 0.009888540022075176, 0.000775161839555949, -0.004219532012939453, 0.02930462174117565, -0.0001595630164956674, 0.047956958413124084, -0.009022107347846031, 0.009663695469498634, 0.01349871139973402, 0.01334175281226635, -0.006239263340830803, 0.02628760039806366, 0.00806177593767643, 0.005327803548425436, -0.029840383678674698, -0.018596196547150612, -0.03663753345608711, 0.018667593598365784, 0.019062306731939316, 0.013389318250119686, 0.015211888588964939, -0.007939250208437443, 0.006764511112123728, 
0.01986221969127655, -0.021661903709173203, 0.020057305693626404, -0.008008782751858234, 0.04586470127105713, 0.025284158065915108, 0.035667579621076584, 0.0036868376191705465, 0.015757033601403236, -0.00721711153164506, -0.0368378609418869, 0.0027331863529980183, -0.01010651234537363, 0.001044904813170433, 0.008318241685628891, -0.014558088034391403, -0.013355758972465992, 0.0032401776406913996, 0.014596031978726387, 0.006481752265244722, 0.028111977502703667, 0.009407085366547108, 0.03312341868877411, -0.021547282114624977, 0.018333012238144875, -0.01218257937580347, 0.04624246805906296, 0.014572354964911938, 0.005895446054637432, -0.024843614548444748, 0.03790052607655525, -0.021420113742351532, -0.02253495715558529, 0.04973427578806877, 0.005535614211112261, 0.01939517632126808, -0.0038872812874615192, 0.010220885276794434, -0.026501905173063278, 0.045801810920238495, -0.03629278391599655, -0.008702532388269901, -0.04062218219041824, -0.018621936440467834, -0.029899081215262413, -0.03611528500914574, -0.001091667334549129, 0.028605224564671516, 0.011706801131367683, -0.06163465604186058, 2.5390319933649153e-05, 0.023150848224759102, 0.04028348997235298, 0.009031616151332855, -0.051419828087091446, 0.03683879226446152, -0.0326903872191906, 0.019212786108255386, 0.021576182916760445, 0.006947805173695087, -0.023350825533270836, 0.015494268387556076, -0.009872582741081715, -0.037689484655857086, 3.377611210453324e-05, -0.01628958061337471, 0.02422376349568367, 0.011166267096996307, 0.0340057909488678, -0.019733870401978493, -0.011461982503533363, 0.004578876309096813, 0.03377704694867134, -0.05956150218844414, 0.010628760792315006, 0.01259208470582962, -0.028173238039016724, 0.004983927588909864, -0.05542497709393501, 0.023941442370414734, 0.015258143655955791, -0.024078410118818283, -0.027858072891831398, -0.022866196930408478, -0.010721179656684399, 0.0008918059174902737, -7.392892439384013e-05, 8.752672874834388e-05, 0.038084547966718674, 0.032564133405685425, 0.011673957109451294, 0.01472498755902052, 0.02389485388994217, 0.00960010290145874, 0.0109937759116292, -0.03255528211593628, -0.05041073262691498, -0.017919976264238358, -0.01058477908372879, 0.019684135913848877, 0.0011487372685223818, 0.02799253724515438, 0.02977399341762066, -0.03532189503312111, 0.004375171847641468, -0.01822771690785885, -0.03516349568963051, -0.02172875590622425, 0.03417746722698212, 0.020600633695721626, -0.0024799820967018604, -0.01565779186785221, 0.019945988431572914, 0.003030882216989994, -0.034521520137786865, -0.023360492661595345, -0.014217297546565533, -0.01676599681377411, 0.03019724413752556, 0.06373017281293869, -0.009890462271869183, 0.0018854058580473065, 0.014999543316662312, -0.017975181341171265, 0.015541205182671547, 0.011049448512494564, -0.05610375478863716, -0.021746985614299774, -0.011690028943121433, -0.007622118573635817, 0.008191464468836784, -0.05543498694896698, -0.026299478486180305, -0.028984099626541138, -0.017471859231591225, -8.391617302550003e-05, -0.01435636356472969, 0.024609975516796112, 0.02546672336757183, 0.026995735242962837, -0.006338262464851141, 0.001575606525875628, -0.024756494909524918, 0.012681999243795872, -0.03830350935459137, 0.018343575298786163, -0.02948654815554619, -0.027943793684244156, 0.019458303228020668, 0.009602433070540428, 0.016136130318045616, -0.01348575484007597, 0.01825813017785549, 0.017384275794029236, -0.0003169276751577854, 0.017263786867260933, 0.032837603241205215, -0.009313443675637245, 0.03288698568940163, 0.01021180022507906, 
0.035332147032022476, 0.025198612362146378, -0.016384443268179893, 0.036642659455537796, -0.013293929398059845, 0.013863727450370789, 0.016344845294952393, 0.018475068733096123, -0.0002906492736656219, -0.007840251550078392, 0.018346253782510757, -0.01792975887656212, -0.023469042032957077, -0.04118729010224342, 0.03813113644719124, 0.028544776141643524, -0.010191068984568119, -0.040393438190221786, -0.03620612993836403, -0.029817089438438416, -0.03193590044975281, -0.02262871526181698, -0.007936688140034676, -0.0004754279216285795, -0.021273594349622726, -0.0007067942642606795, -0.02617795206606388, -0.023912090808153152, 0.005262376740574837, 0.008055894635617733, -0.004182644188404083, -0.02018553763628006, 0.03565731272101402, 0.035227786749601364, 0.018733106553554535, 0.028883470222353935, 0.004832546692341566, -0.008453494869172573, -0.0037667355500161648, 0.0500589981675148, -0.0398828387260437, -0.021579908207058907, 0.016212068498134613, 0.0052460129372775555, 0.0071945455856621265, 0.02285350114107132, 0.008869257755577564, 0.0116716418415308, -0.026226453483104706, -0.017740612849593163, 0.011699113994836807, -0.05042319372296333, -0.007674806751310825, -0.010096263140439987, 0.0254192054271698, -0.014626921154558659, 0.02581986039876938, -0.0014779962366446853, 0.04758192598819733, 0.01762600615620613, -0.0396694652736187, 0.004964914172887802, -0.01010680291801691, 0.01680326648056507, 0.0026360510382801294, 0.009402979165315628, 0.02683034911751747, -0.01728125661611557, 0.026732511818408966, 0.04476930573582649, -0.022035304456949234, -0.0026686624623835087, -0.04830263927578926, -0.018759019672870636, 0.049695149064064026, 0.008410485461354256, 0.014944417402148247, 0.021057194098830223, -0.002655734308063984, -0.011493313126266003, -0.05742404982447624, -0.027233505621552467, -0.025073057040572166, 0.006897956132888794, -0.014840759336948395, -0.006567823234945536, 0.021026913076639175, 0.021548695862293243, -0.04565226286649704, -0.04065246507525444, -0.002003258327022195, -0.02641427516937256, -0.0138088408857584, -0.018532603979110718, 0.003003977704793215, 0.032578110694885254, -0.012093013152480125, -0.015617376193404198, -0.006373319774866104, -0.017388643696904182, -0.0760996863245964, 0.024051155894994736, 0.05450975522398949, 0.007753248792141676, -0.022590396925807, -0.017270192503929138, 0.005862135905772448, 0.005667049903422594, 0.056552864611148834, -0.018218137323856354, 0.002630227478221059, -0.018802523612976074, 0.008255202323198318, 0.010540069080889225, -0.0258971955627203, 0.024512607604265213, -0.0323006808757782, 0.01870104856789112, 0.056824006140232086, 0.005657848436385393, 0.02999904192984104, -0.006054659839719534, 0.034501250833272934, 0.02838684618473053, -0.03185972571372986, 0.02039937488734722, 0.0109031330794096, -0.009345181286334991, -0.010869313962757587, 0.020233290269970894, -0.005056007765233517, 0.01182629819959402, -0.05112538859248161, 0.01783483661711216, -0.030273182317614555, -0.02501854859292507, 0.028292270377278328, 0.002678678836673498, 0.040275104343891144, 0.0176090020686388, 0.04298534616827965, 0.048773642629384995, -0.011546771973371506, 0.0032369601540267467, -0.04217961058020592, 0.003907430451363325, 0.007474770303815603, 0.0020578971598297358, -0.00793265551328659, 0.03525387495756149, 0.040193110704422, 0.028112908825278282, 0.00277569773606956, -0.013180254958570004, 0.052939049899578094, -0.016773218289017677, -0.011929665692150593, -0.01788666658103466, -0.03832004964351654, -0.0014743274077773094, 
0.0038443042431026697, -0.020997095853090286, 0.053711939603090286, 0.034125056117773056, 0.005273630376905203, 0.030854597687721252, -0.009823258966207504, -0.014736343175172806, 0.013467120006680489, -0.003143042093142867, 0.03621765971183777, 0.016900168731808662, -0.02626989781856537, -0.04309646040201187, -0.03412901610136032, 0.026982691138982773, 0.044919900596141815, 0.04545542970299721, 0.034214042127132416, 0.03967505320906639, 0.020239580422639847, 0.02918495237827301, 0.000920457358006388, -0.02767312154173851, -0.004076310899108648, 0.02420617640018463, -0.0010777055285871029, 0.03642229735851288, -0.009551186114549637, 0.027132702991366386, -0.030304860323667526, -0.009744525887072086, 0.00902730692178011, -0.002761124400421977, -0.013965346850454807, -0.023005494847893715, 0.03726647049188614, -0.009159150533378124, -0.012412722222507, 0.046041734516620636, 0.0025397890713065863, -0.04388844966888428, 0.013942473568022251, 0.012532802298665047, -0.028174404054880142, 0.0011878127697855234, 0.03414705768227577, -0.011002627201378345, -0.05322370305657387, 0.0002616484125610441, -0.008412198163568974, -0.026623496785759926, -0.014190278016030788, 0.013136841356754303, -0.02031470276415348, 0.011970721185207367, -0.014818396419286728, 0.00402138102799654, 0.03894828259944916, -0.009349752217531204, -0.033922865986824036, -0.04669489711523056, 0.03316441550850868, -0.012412605807185173, -0.0069870552979409695, 0.0050138020887970924, -0.025574108585715294, 0.03725380450487137, -0.02563956379890442, 0.010128132067620754, 0.0067054033279418945, 0.028493063524365425, 0.03667387366294861, -0.018769269809126854, 0.011673374101519585, -0.009125039912760258, 0.043242745101451874, -0.022041214630007744, -0.013873131014406681, 0.0040014213882386684, 0.014715437777340412, 0.009864255785942078, -0.005650743842124939, 0.005212542600929737, 0.009848241694271564, -0.004596463404595852, 0.014585982076823711, 0.0010605991119518876, 0.01963021233677864, -0.05266837775707245, -0.0075400518253445625, 0.01891998015344143, 0.002873386489227414, -0.022146182134747505, -0.008001561276614666, 0.018102599307894707, 0.012984353117644787, -0.01275374460965395, 0.04734339937567711, -0.01901082694530487, -0.012095343321561813, -0.02884483151137829, 0.027450384572148323, 0.03602537140250206, -0.02632300741970539, 0.03891846910119057, -0.024301914498209953, 0.02443387359380722, 0.015063680708408356, 0.008245797827839851, -0.017457766458392143, 0.0336039736866951, -0.0016059959307312965, -0.04229841008782387, 0.04986065253615379, -0.012155674397945404, -0.030027665197849274, -0.040266718715429306, -0.04476430267095566, -0.019847078248858452, 0.042488954961299896, 0.0393955260515213, 0.024377036839723587, -0.015419845469295979, -0.04206034541130066, -0.039582811295986176, -0.03152010217308998, 0.002762973541393876, 0.0037384917959570885, 0.006927480921149254, 0.01287219300866127, -0.028545474633574486, -0.005230405833572149, 0.006820212583988905, 0.02025914564728737, 0.01904739998281002, -0.018458763137459755, 0.03272649273276329, 0.009633413515985012, -0.02492496743798256, -0.019173884764313698, 0.0007094148313626647, 0.019329721108078957, 0.03748822584748268, 0.006577199324965477, -0.008009830489754677, -0.022320186719298363, 0.013312098570168018, 0.03799416869878769, 0.04872891679406166, 0.0032725415658205748, 0.008341069333255291, -0.035052619874477386, -0.011916212737560272, -0.009421905502676964, 0.003632912179455161, -0.05392717942595482, 0.058800019323825836, -0.01611330360174179, 0.010717393830418587, 
0.004583651665598154, 0.005885954014956951, -0.006185091100633144, -0.004404172301292419, -0.041753336787223816, 0.01793057471513748, -0.01214204728603363, -0.01298511028289795, 0.007763760630041361, -0.028712956234812737, -0.03476051241159439, -0.010070872493088245, -0.06982570886611938, -0.01954542100429535, -0.01183934323489666, 0.022942250594496727, -0.019640810787677765, 0.004966544918715954, -0.0008832162711769342, 0.019450616091489792, -0.010295076295733452, -0.010681492276489735, 0.02338704839348793, -0.02349047362804413, -0.0360584482550621, -0.018022002652287483, 0.012867884710431099, -0.015124739147722721, 0.006681992206722498, -0.03926344960927963, 0.004456699825823307, 0.029135974124073982, 0.00453575374558568, -0.01049045380204916, 0.04586376994848251, -0.002237143460661173, 0.028558986261487007, 0.010980673134326935, -0.005637721158564091, 0.006188715808093548, -0.0007730653742328286, -0.003796042175963521, -0.010051538236439228, -0.014218376018106937, -0.023889146745204926, -0.02719489485025406, 0.02170284278690815, -0.00012747570872306824, -0.023174140602350235, -0.02770107239484787, -0.028250807896256447, 0.051525577902793884, 0.021920697763562202, 0.04455465450882912, -0.03763171657919884, 0.00633666105568409, -0.01780344732105732, 0.024745546281337738, -0.002253914950415492, 0.012576362118124962, -0.012260496616363525, 0.008785051293671131, 0.035455137491226196, -0.014336329884827137, -0.03576727584004402, -0.03551756218075752, 0.028417127206921577, 0.015258068218827248, 0.0033607236109673977, -0.016561128199100494, 0.02958414889872074, 0.009740099310874939, 0.010442876257002354, 3.62146929546725e-05, 0.047732871025800705, -0.05051788315176964, -0.02551540732383728, -0.012026626616716385, 0.017299311235547066, 0.04242291674017906, 0.004559309687465429, -0.036336109042167664, -0.036769844591617584, -0.011859143152832985, 0.029975950717926025, -0.00475800596177578, 0.023865504190325737, -0.00578354811295867, 0.03891264647245407, 0.019545886665582657, -0.029897218570113182, 0.05478392541408539, -0.022606847807765007, 0.013859832659363747, 0.004012602381408215, -0.00411069905385375, 0.010832581669092178, -0.004449653439223766, 0.018642086535692215, 0.012014280073344707, -0.016844380646944046, 0.021052157506346703, -0.030187925323843956], index=0, object='embedding')], model='voyage-large-2', object='list', usage=Usage(prompt_tokens=None, total_tokens=6), meta={'usage': {'credits_used': 2}}) ``` {% endcode %}
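In most cases you won't need the whole response object, just the numeric vector. The snippet below is a minimal sketch that continues from the Python example above; it assumes `response` is the object returned by `client.embeddings.create`, and the attribute names follow the OpenAI-compatible SDK used in that example.

```python
# Minimal sketch: continues from the Python example above and assumes
# `response` is the object returned by client.embeddings.create().
vector = response.data[0].embedding   # the embedding as a list of floats
print(len(vector))                    # dimensionality of the vector
print(vector[:5])                     # first few components
print(response.usage.total_tokens)    # tokens billed for the request
```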
You can find a more advanced example of using embedding vectors in our article [Find Relevant Answers: Semantic Search with Text Embeddings](https://docs.aimlapi.com/use-cases/find-relevant-answers-semantic-search-with-text-embeddings) in the Use Cases section. --- # Source: https://docs.aimlapi.com/api-references/embedding-models/anthropic/voyage-law-2.md # voyage-law-2 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `voyage-law-2` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview This model leads the MTEB leaderboard for legal retrieval by a significant margin, outperforming OpenAI v3 large by an average of 6% across eight legal retrieval datasets and by over 10% on three key benchmarks (LeCaRDv2, LegalQuAD, and GerDaLIR). With a 16K context length and training on extensive long-context legal documents, voyage-law-2 excels in retrieving information across lengthy texts. Notably, it also matches or surpasses performance on general-purpose corpora across domains. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema {% openapi src="" path="/v1/embeddings" method="post" %} [voyage-law-2.json](https://3927338786-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FROMd1X5PuqtikJ48n2N9%2Fuploads%2Fgit-blob-1e2f1017f077054fa361a442474bde8bccafcd62%2Fvoyage-law-2.json?alt=media\&token=ac5bd267-e25a-4c01-9380-073d2544341a) {% endopenapi %} ## Code Example {% tabs %} {% tab title="Python" %}
```python
import openai

# Initialize the API client
client = openai.OpenAI(
    # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
    api_key="<YOUR_AIMLAPI_KEY>",
    base_url="https://api.aimlapi.com/v1",
)

# Define the text for which to generate an embedding
text = "Laura is a DJ."

# Request the embedding
response = client.embeddings.create(
    input=text,
    model="voyage-law-2"
)

# Print the embedding
print(response)
```
{% endtab %}

{% tab title="JS" %}
```javascript
import OpenAI from "openai";
import util from "util";

// Initialize the API client
const client = new OpenAI({
  // Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
  apiKey: "<YOUR_AIMLAPI_KEY>",
  baseURL: "https://api.aimlapi.com/v1",
});

// Define the text for which to generate an embedding
const text = "Laura is a DJ.";

// Request the embedding
const response = await client.embeddings.create({
  input: text,
  model: "voyage-law-2",
});

// Convert the embedding to a regular array (not a TypedArray)
const pythonLikeResponse = {
  ...response,
  data: response.data.map(item => ({
    ...item,
    embedding: Array.from(item.embedding),
  })),
};

// Python-like print
console.log(
  util.inspect(pythonLikeResponse, {
    depth: null,
    maxArrayLength: null,
    compact: true,
  })
);
```
{% endtab %}
{% endtabs %}

This example shows how to set up an API client, send text to the embedding API, and print the response with the embedding vector. Note how large a vector the model returns for even a single short input phrase.
Response {% code overflow="wrap" %} ```json CreateEmbeddingResponse(data=[Embedding(embedding=[0.001145921996794641, -0.014177074655890465, 0.03125114366412163, -0.004943643696606159, 0.040808435529470444, -0.04112643748521805, -0.002148799132555723, 0.022367684170603752, -0.005523522850126028, 0.03991401568055153, 0.0011472610058262944, -0.04464377462863922, -0.008764686062932014, 0.007735587190836668, -0.04927933216094971, -0.07220178842544556, 0.014300036244094372, -0.01632966846227646, -0.006595244165509939, -0.010660311207175255, -0.029766302555799484, -0.006220588460564613, -0.010902494192123413, -0.0033697024919092655, 0.03782792389392853, -0.025107743218541145, -0.019067829474806786, -0.024837607517838478, -0.03289186581969261, -0.019047968089580536, 0.029241429641842842, 0.03866409882903099, -0.004571191035211086, 0.0077200778760015965, -0.02732025273144245, -0.011416708119213581, 0.0006872184458188713, 0.004462233744561672, 0.004217428155243397, -0.0007986027630977333, -0.05442861467599869, 0.022397488355636597, -0.005475934129208326, -0.037046920508146286, 0.010534225963056087, 0.015138332732021809, -0.05938263610005379, -0.048910342156887054, 0.022915763780474663, -0.004510896280407906, -0.006746992468833923, -0.0086869141086936, -0.04371240735054016, -0.030562087893486023, -0.055399250239133835, -0.018822578713297844, 0.03140883520245552, 0.031007511541247368, 0.017470723018050194, 0.04167608171701431, 0.07282128185033798, 0.0258940439671278, -0.02182808518409729, -0.02368309535086155, 0.016176050528883934, -0.04326698184013367, -0.023447105661034584, 0.020617222413420677, -0.039187632501125336, 0.005500593222677708, 0.014478061348199844, 0.01988079771399498, 0.03360508382320404, 0.017932424321770668, 0.014319449663162231, 0.008472152054309845, -0.014332504943013191, 0.02586214616894722, -0.010057111270725727, -0.013000803999602795, -0.06030115857720375, 0.008836403489112854, -0.04745791107416153, -0.0003351849736645818, 0.0237923301756382, 0.0034919935278594494, -0.021912749856710434, -0.03563828393816948, 0.009686946868896484, 0.0323210284113884, 0.05512520670890808, -0.049066439270973206, 0.04279444366693497, -0.02676391787827015, -0.02919680066406727, -0.02514258213341236, -0.0061471969820559025, -0.01868845894932747, 0.011692197993397713, 0.040878452360630035, -0.031926482915878296, -0.008131025359034538, 0.009284088388085365, 0.012242062017321587, -0.004513811320066452, -0.03036213479936123, 0.01390973012894392, -0.018667036667466164, -0.05612540617585182, 0.04489113390445709, 0.024045953527092934, -0.032889802008867264, -0.01676337793469429, 0.08109244704246521, 0.025806788355112076, 0.038225702941417694, -0.005312497727572918, 0.03547114133834839, -0.021163517609238625, -0.013856060802936554, 0.03299630433320999, -0.042760856449604034, -0.016620444133877754, 0.008736232295632362, 0.017832087352871895, 0.04396145045757294, 0.023309526965022087, -0.0372849777340889, 0.017267437651753426, 0.030474606901407242, -0.019216230139136314, 0.05698663368821144, 0.02362808585166931, -0.051865749061107635, 0.04558917507529259, -0.014644593000411987, -0.0513453409075737, 0.015795020386576653, -0.008818913251161575, 0.016578935086727142, 0.007790931034833193, 0.015594187192618847, 0.045005835592746735, -0.003380358451977372, 0.02321702614426613, -0.018220271915197372, -0.004278629552572966, -0.0014467405853793025, -0.0013903927756473422, -0.004719369113445282, -0.021359004080295563, 0.02700124680995941, 0.022842232137918472, -0.04729723557829857, -0.027205772697925568, 
-0.0015674695605412126, 0.005131990183144808, -0.020173583179712296, 0.0022734194062650204, 0.0478912852704525, -0.0037952112033963203, -0.01200149580836296, 0.022421687841415405, -0.014169319532811642, -0.020144572481513023, -0.0017975465161725879, 0.08082287013530731, 0.04061228036880493, -0.026470687240362167, -0.003165121655911207, -0.0149068059399724, -0.02975313365459442, 0.03477265313267708, -0.012908861972391605, -0.01772407814860344, -0.04324979707598686, -0.03684401512145996, 0.03385474160313606, -0.02443435974419117, 0.018713898956775665, -0.048338938504457474, -0.0035805876832455397, 0.04770606383681297, -0.0052773780189454556, 0.0648009404540062, 0.024685973301529884, 0.004514229949563742, 0.022826053202152252, 0.030599577352404594, 0.002245203824713826, 0.019117817282676697, -0.02834968827664852, 0.013400258496403694, -0.0006651815492659807, -0.010280939750373363, -0.021007083356380463, 0.0011670105159282684, 0.040681906044483185, 0.02264975756406784, -0.006151492241770029, 0.04752608388662338, 0.0045180232264101505, -0.046775933355093, -0.04549678787589073, -0.017195971682667732, 0.016514107584953308, 0.029807137325406075, 0.03702923655509949, -0.03762640804052353, -0.014497531577944756, 0.04359368607401848, 0.019200609996914864, -0.013383966870605946, 0.02121027000248432, -0.003820037702098489, 0.03453565761446953, -0.00039186739013530314, 0.016617320477962494, -0.074774369597435, -0.02964133210480213, 0.024574726819992065, 0.0033100773580372334, -0.022461123764514923, 0.006639764178544283, -0.016722315922379494, -0.00557551858946681, -0.050562500953674316, -0.0009685105760581791, 0.0719875618815422, 0.029112668707966805, 0.028127865865826607, -0.018853290006518364, 0.039433106780052185, -0.0413379929959774, 0.044096238911151886, 0.04763464629650116, 0.03127128630876541, 0.012388900853693485, 0.028565704822540283, 0.020139440894126892, -0.029545597732067108, 0.004291349556297064, -0.05138104408979416, 0.060420773923397064, 0.009102324955165386, -0.03176669776439667, -0.052843183279037476, 0.0010598942171782255, -0.015904327854514122, -0.0012955466518178582, -0.028982121497392654, 0.039887458086013794, 0.0076407166197896, -0.04452269524335861, -0.035921476781368256, 0.00841443706303835, -0.008878106251358986, -0.00961302500218153, -0.032018423080444336, -0.024133987724781036, -0.00779667729511857, 0.005153636448085308, 0.07313369959592819, -0.00231906957924366, 0.018437398597598076, 0.04072999581694603, -0.033586785197257996, -0.03613291680812836, -0.021996792405843735, 0.01946929469704628, 0.015166339464485645, 0.027449017390608788, 0.029560547322034836, -0.008007395081222057, -0.009391727857291698, 0.005254894960671663, 0.03553050011396408, 0.03271780163049698, -0.00063377182232216, -0.049005743116140366, 0.025775544345378876, 0.01592240110039711, 0.03430044651031494, -0.0027016198728233576, -0.0330306738615036, 0.017405852675437927, -0.024000762030482292, 0.013561156578361988, 0.0647188201546669, 0.00016179034719243646, -0.017831752076745033, -0.014222655445337296, 0.008091302588582039, -0.043333929032087326, 0.03521093726158142, 0.014284470118582249, -0.03012179397046566, -0.014894223771989346, 0.012823614291846752, 0.02397453971207142, -0.042740099132061005, -0.03365696966648102, 0.012233917601406574, -0.013851150870323181, -0.0018768239533528686, -0.02254054695367813, 0.06436800956726074, -0.06637822836637497, 0.034709274768829346, 0.1046348363161087, -0.029792798683047295, 0.009003856219351292, 0.012787909246981144, -0.025163866579532623, -0.014081227593123913, 
0.004209338687360287, 0.014923987910151482, -0.0256761834025383, -0.0023837299086153507, 0.024450652301311493, -0.028263993561267853, -0.03331988677382469, -0.009718021377921104, 0.02328149974346161, -0.037767112255096436, 0.014227368868887424, -0.015668446198105812, 0.0019476209999993443, 0.00867475289851427, -0.0765937864780426, -0.01449129730463028, 0.01003167126327753, 0.025557519868016243, 0.003770272945985198, -0.020493371412158012, -0.019638223573565483, 0.0034754800144582987, -0.03740191459655762, 0.03549502044916153, -0.015144553035497665, -0.018000012263655663, 0.031627897173166275, 0.05309870094060898, 0.03878309950232506, 0.028096932917833328, -0.03140696510672569, 0.013281871564686298, 0.010418071411550045, 0.013049005530774593, 0.008618460968136787, 0.02183656580746174, -0.0011184734757989645, 0.04332857206463814, 0.002970861503854394, -0.014222376048564911, 0.058815814554691315, -0.028239835053682327, -0.040106602013111115, 0.03317505866289139, -0.0274976659566164, -0.028349855914711952, 0.016879532486200333, 0.020028026774525642, -0.010273575782775879, 0.03750992193818092, -0.027580905705690384, -0.06074658781290054, -0.045263249427080154, 0.06115184351801872, 0.024669792503118515, -0.00264156237244606, -0.04330313205718994, 0.010373049415647984, -0.014033137820661068, -0.006179219577461481, -0.014256074093282223, -0.018487950786948204, -0.03582105413079262, 0.016275886446237564, -0.0006182623328641057, 0.008128626272082329, -0.03364524990320206, 0.02086797170341015, -0.012031623162329197, -0.06193959340453148, -0.02423909492790699, 0.022682951763272285, -0.06499285995960236, -0.011272882111370564, -0.008638237603008747, 0.04466685652732849, -0.0005295565933920443, -0.035249099135398865, 0.011713622137904167, -0.015474745072424412, -0.0013205440482124686, 0.05763039365410805, -0.017460638657212257, -0.05613969266414642, 0.009500217624008656, 0.07699882239103317, 0.028761640191078186, 0.010898030363023281, -0.011327109299600124, -0.018325602635741234, -0.03216414526104927, -0.009655035100877285, 0.007208541501313448, -0.0722571313381195, 0.03097604773938656, -0.027253251522779465, -0.014299142174422741, -0.015868397429585457, 0.05007243901491165, -0.0348423607647419, -0.027945322915911674, -0.012210652232170105, 0.035499636083841324, -0.05380544811487198, -0.03254016861319542, 0.002633500611409545, 0.005806042347103357, 0.048660289496183395, -0.034142617136240005, -0.058035653084516525, 0.006394847296178341, -0.031708452850580215, 0.006064188200980425, 0.05724656209349632, -0.03563583269715309, -0.05170150473713875, -0.09581559151411057, -0.0031126232352107763, -0.03960806131362915, 0.038220908492803574, -0.006394505966454744, 0.021812910214066505, 0.027239691466093063, 0.03285214677453041, -0.003753536380827427, -0.026897814124822617, 0.06423233449459076, -0.025549041107296944, -0.009102324955165386, -0.0017718833405524492, -0.029008898884058, -0.034774646162986755, 0.05623431131243706, 0.022213982418179512, 0.005656637251377106, -0.0020381121430546045, 0.007090546190738678, -0.005778035614639521, 0.025784695520997047, -0.01595900021493435, -0.02116452157497406, 0.05037861689925194, -0.03104790300130844, 0.013191883452236652, 0.0030482416041195393, 0.05810750648379326, 0.0492904894053936, 0.07818780839443207, 0.0006442603189498186, -0.022125443443655968, -0.024750130251049995, 0.002821623580530286, -0.04886247590184212, 0.03325650840997696, 0.0017020903760567307, 0.018687007948756218, 0.006303387228399515, 0.02424726076424122, -0.021118327975273132, -0.02257879078388214, 
-0.017508840188384056, -0.0039452300406992435, -0.004271599929779768, -0.0010283171432092786, -0.02456914819777012, 0.014620492234826088, -0.011561203747987747, 0.0025493695866316557, -0.08945644646883011, 0.038821205496788025, 0.05850384011864662, 0.0032496429048478603, -0.001494050258770585, 0.03358633816242218, -0.043318528681993484, 0.0013525673421099782, 0.00973509345203638, 0.00237140036188066, 0.024588339030742645, 0.018827933818101883, -0.018944310024380684, 0.028625009581446648, 0.030285870656371117, -0.00808706320822239, 0.027217155322432518, -0.04330090060830116, 0.028323130682110786, -0.02708069235086441, 0.026323961094021797, 0.016147680580615997, 0.010424653999507427, -0.015739301219582558, 0.011106517165899277, 0.0319095216691494, -0.022035622969269753, 0.0016723055159673095, -0.005198324099183083, 0.0033427560701966286, -0.037746917456388474, -0.05215674638748169, 0.02827559784054756, 0.007057853043079376, 0.0256655290722847, 0.02479677088558674, -0.05218888446688652, 0.039632610976696014, -0.025019481778144836, -0.0014387067640200257, -0.0037899392191320658, 0.003820874495431781, -0.03125030919909477, -0.011954410001635551, -0.0298533346503973, -0.03917691856622696, -0.0275180134922266, -0.03595015034079552, 0.013233447447419167, -0.04498976841568947, 0.02391752414405346, 0.0597066693007946, -0.02744879387319088, 0.013675106689333916, 0.013478116132318974, 0.03886896371841431, 0.006187755614519119, 0.0832320973277092, -0.026846738532185555, 0.016496367752552032, 0.012189955450594425, -0.019241448491811752, -0.021130265668034554, -0.010129415430128574, 0.03591187670826912, 0.03418712317943573, 0.031019115820527077, -0.011840041726827621, -0.028119970113039017, -0.004102333914488554, -0.08377571403980255, -0.058411356061697006, -0.014628303237259388, -0.0073417117819190025, 0.009914959780871868, -0.034190207719802856, -0.005446309689432383, 0.0006849311175756156, -0.0326559878885746, -0.04293759539723396, -0.02534380927681923, -0.030421050265431404, 0.002926076063886285, -0.021355656906962395, 0.02484145760536194, 0.008068540133535862, -0.01780553162097931, 0.012713820673525333, -0.03941882401704788, 0.019537579268217087, -0.04231899976730347, 0.013900693506002426, 0.000768852885812521, -0.00825114082545042, 0.0701255202293396, 0.0009972980478778481, -0.01979934610426426, -0.014732741750776768, -0.002510149497538805, -0.0032736605498939753, -0.011174469254910946, -0.009924611076712608, -0.03716268762946129, 0.004407120402902365, 0.02614334039390087, -0.01884818635880947, 0.007994117215275764, -0.02295682393014431, 0.0420764684677124, 0.023643484339118004, -0.012180916033685207, -0.022807586938142776, 0.04378649219870567, 0.01729527674615383, 0.03923940286040306, 0.0003632472362369299, -0.02712811529636383, -0.020127221941947937, 0.016544179990887642, -0.00515709538012743, -0.014552651904523373, 0.042291272431612015, 0.028434263542294502, -0.007061869837343693, 0.019150177016854286, -0.01954650692641735, -0.016914566978812218, 0.0475025400519371, -0.020649580284953117, 0.02170177735388279, -0.008819610811769962, -0.03432633355259895, 0.05817713215947151, -0.05076763778924942, -0.0005134891252964735, 0.03977788984775543, 0.05272636190056801, -0.0700460746884346, -0.01899290457367897, -0.0075101968832314014, 0.00827697105705738, 0.028025882318615913, 0.04586365818977356, -0.059258561581373215, 0.017685359343886375, 0.019320223480463028, 0.0122474180534482, 0.014716004021465778, 0.014744178391993046, 0.02210865169763565, -0.017463093623518944, 0.030344728380441666, 
-0.01971365138888359, -0.02899673767387867, 0.02536839246749878, 0.030788537114858627, -0.01567893661558628, -0.012389346957206726, -0.0164337158203125, -0.02727450430393219, 0.0012375846272334456, 0.016269108280539513, 0.04073992744088173, -0.027392780408263206, 0.027683110907673836, -0.04990730062127113, 0.009327046573162079, -0.05764467641711235, 0.06980148702859879, 0.01641201414167881, 0.007253619376569986, 0.03552503138780594, 0.050266142934560776, 0.02752712182700634, -0.018076837062835693, 0.03913095220923424, -0.047113798558712006, 0.013959941454231739, 0.0347130112349987, -0.007833330892026424, 0.015391549095511436, -0.01305862981826067, -0.054397713392972946, 0.027038181200623512, 0.07026432454586029, -0.06392616033554077, -0.03929084539413452, -0.023326821625232697, -0.02984619140625, -0.0047892178408801556, 0.022590843960642815, 0.009632774628698826, -0.028107335790991783, -0.024359244853258133, -0.012757671996951103, 0.048776112496852875, -0.026022136211395264, 0.01249914150685072, 0.0053896550089120865, -0.03995976224541664, -0.001355189480818808, 0.041087161749601364, 0.009370562620460987, 0.004791058599948883, -0.016357561573386192, -0.04828270152211189, -0.004562934394925833, -0.012064873240888119, -0.017316868528723717, -0.0721946507692337, -0.06088494509458542, -0.03201964870095253, -0.029732953757047653, -0.009372320026159286, 0.06887761503458023, 0.03077816031873226, 0.016479406505823135, 0.001433936762623489, 0.06095457077026367, -0.01486306544393301, -0.13356876373291016, -0.05411162227392197, -0.007502831984311342, -0.054387446492910385, -0.011197901330888271, 0.019364185631275177, 0.01709839515388012, -0.0033500646241009235, 0.011623465456068516, 0.05273696035146713, 0.00012095223064534366, -0.01641283929347992, 0.03422435000538826, 0.00609810184687376, -0.013006828725337982, -0.008665547706186771, -0.026904061436653137, 0.00842710118740797, 0.0016631907783448696, 0.006401988677680492, -0.004724724683910608, 0.014352003112435341, 0.019987469539046288, -0.017628872767090797, -0.01024230569601059, -0.01165593508630991, 0.05975732207298279, 0.02418575994670391, 0.006196905393153429, 0.04520399868488312, -0.04037841036915779, 0.0565137080848217, 0.01152170542627573, 0.04481983184814453, 0.02531561441719532, -0.020357493311166763, -0.015405342914164066, 0.0006071043317206204, -0.054909415543079376, -0.011409232392907143, 0.0087437080219388, -0.050664257258176804, -0.03172898665070534, 0.050861530005931854, 0.04358208179473877, -0.0014600184513255954, 0.04910749942064285, -0.06658889353275299, 0.01363769918680191, 0.01531574409455061, -0.010150113143026829, -0.005136564839631319, -0.010456343181431293, -0.0037545403465628624, -0.007348629180341959, -0.007315573748201132, 0.033995166420936584, -0.039274662733078, 0.037698715925216675, -0.03998163342475891, 0.01217717956751585, 0.011987884528934956, 0.046892423182725906, -0.005388846155256033, -0.02546066790819168, -0.014757176861166954, 0.004274612758308649, -0.044090885668992996, -0.05012867599725723, -0.001559658907353878, 0.023817772045731544, 0.020754104480147362, -0.00782752875238657, -0.06430552899837494, 0.029789062216877937, 0.04195212572813034, 0.009223947301506996, 0.030174681916832924, 0.014573629014194012, -0.04551463946700096, -0.04100548475980759, -0.051445312798023224, 0.04484962671995163, -0.02416422590613365, -0.008851327002048492, -0.006876201368868351, -0.0504966638982296, -0.006871961522847414, 0.01833285577595234, 0.024722011759877205, -0.016506576910614967, 0.02045353688299656, 0.02139381691813469, 
0.002282694447785616, 0.021176906302571297, 0.03238485008478165, 0.030947256833314896, -0.01740194670855999, 0.040927156805992126, 0.018257539719343185, 0.020330576226115227, 0.02818661369383335, 0.02111571468412876, 0.023179981857538223, -0.02116931974887848, -0.027575546875596046, -0.015910129994153976, -0.0795624703168869, 0.03904213011264801, -0.04399341717362404, -0.014630758203566074, 0.007061535492539406, 0.017312126234173775, 0.006566178053617477, 0.007352422922849655, 0.0294659286737442, -0.013821777887642384, -0.020154837518930435, 0.01053701527416706, 0.004062109626829624, -0.031318049877882004, -0.007823735475540161, -0.0388261154294014, 0.0027591390535235405, 0.004971649963408709, -0.02874244563281536, -0.009154600091278553, 0.006594518665224314, -0.010884920135140419, 0.006409798748791218, 0.010209082625806332, 0.009118547663092613, 0.01654903218150139, -0.013935839757323265, 0.02535165473818779, -0.005364633165299892, -0.005297881085425615, -0.04568289965391159, -0.030474606901407242, -0.0047396766021847725, -0.03981247916817665, -0.012568098492920399, -0.019353361800312996, 0.07065842300653458, -0.04424932599067688, -0.03879431262612343, 0.002386351814493537, 0.024101408198475838, 0.0029976684600114822, 0.05570051446557045, -0.055808521807193756, 0.03990798816084862, 0.022983547300100327, 0.023997638374567032, 0.05878903344273567, 0.021059971302747726, -0.00211130827665329, 0.07313459366559982, 0.03020670637488365, -0.06616935133934021, 0.05138104408979416, -0.0009026785846799612, 0.014691903255879879, 0.0008158697164617479, -0.017611494287848473, -0.028630197048187256, 0.08436128497123718, 0.028457581996917725, 0.0071507710963487625, -0.004520757123827934, -0.027482938021421432, 0.000503795628901571, 0.0028312753420323133, -0.014958145096898079, 0.044937100261449814, -0.030961986631155014, 0.017850719392299652, 0.004546197131276131, -0.026507509872317314, 0.020620599389076233, -0.01898994669318199, 0.01429445669054985, -0.021233590319752693, -0.005318774376064539, -0.012734482996165752, -0.05226006731390953, 0.015037800185382366, -7.698989065829664e-05, 0.02224896289408207, -0.001069378457032144, 0.0071364049799740314, -0.028092160820961, -0.03832445293664932, 0.006845294032245874, 0.005781383253633976, 0.01684282347559929, -0.011956529691815376, 0.046709880232810974, -0.0028558787889778614, 0.04565317928791046, 0.037477340549230576, -0.02403870038688183, -0.0030974482651799917, -0.011234800331294537, 0.020296990871429443, 0.004416207317262888, 0.0357099212706089, -0.039288051426410675, 0.0006339950487017632, 0.019149282947182655, -0.012298967689275742, -0.010015380568802357, 0.015221905894577503, 0.0026466669514775276, 0.004475902300328016, -0.04051319509744644, 0.05703277140855789, 0.05664955452084541, 0.056078989058732986, -0.02330433763563633, -0.037176523357629776, 0.061608873307704926, 0.023199619725346565, 0.006516887340694666, -0.0071214125491678715, 0.014034085907042027, -0.004593953490257263, 0.0018388309981673956, -0.04336918890476227, -0.015650872141122818, 0.018403764814138412, -0.016374243423342705, 0.015836654230952263, -0.039694201201200485, 0.0013733211671933532, -0.019124846905469894, 0.0002367718261666596, -0.05424729734659195, 0.030994122847914696, -0.02692369930446148, -0.0016109856078401208, -0.025020822882652283, -0.04652688652276993, 0.04564094915986061, -0.03955436497926712, -0.06208464875817299, 0.03464500606060028, -0.013441940769553185, -0.02546435222029686, 0.0045301299542188644, -0.041565392166376114, 0.007141537964344025, 
-0.015179616399109364, -0.03355598822236061, 0.012590684927999973, -0.01913689821958542, -0.02782280743122101, -0.02182351052761078, 0.02225961908698082, 0.020398583263158798, -0.06530706584453583, -0.011126712895929813, -0.0277753584086895, -0.0377732515335083, -0.06500803679227829, 0.0044650789350271225, 0.005462377332150936, 0.045227207243442535, 0.0639609694480896, -0.028674662113189697, 0.008761003613471985, 0.028341539204120636, 0.02175913006067276, -0.01818227954208851, -0.01638925075531006, 0.06151960790157318, 0.0036959610879421234, -0.035613514482975006, -0.06519994884729385, -0.0032936055213212967, 0.06449387222528458, -0.0008187429048120975, -0.0672534629702568, 0.001694223959930241, -0.04592391103506088, -0.0343276746571064, -0.0029521717224270105, 0.01506033819168806, -0.024530766531825066, -0.0422944538295269, -0.014840415678918362, -0.0017866117414087057, 0.004992069210857153, -0.014189683832228184, -0.021117448806762695, -0.04160444438457489, 0.0063484301790595055, 0.016624823212623596, 0.02222709357738495, 0.03675296530127525, 0.02001313306391239, -0.06132769212126732, -0.015704600140452385, 0.021822283044457436, -0.018610578030347824, 0.014797680079936981, 0.05640926957130432, 0.051336076110601425, -0.08115136623382568, -0.010242333635687828, -0.0350770428776741, -0.014551089145243168, -0.011493307538330555, 0.010676154866814613, 0.045147765427827835, -0.06744225323200226, 0.041737113147974014, -0.006552007049322128, -0.034074876457452774, 0.06167224794626236, -0.036563724279403687], index=0, object='embedding')], model='voyage-law-2', object='list', usage=Usage(prompt_tokens=None, total_tokens=6), meta={'usage': {'credits_used': 2}}) ``` {% endcode %}
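A typical next step is comparing such vectors: passages whose embeddings have a high cosine similarity are semantically close, which is what retrieval with this model relies on. Below is a minimal, illustrative sketch (not part of the official examples) that embeds a hypothetical legal query and passage and scores them; it assumes `numpy` is installed and reuses the same client setup as the Python example above.

```python
import numpy as np
import openai

client = openai.OpenAI(
    # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
    api_key="<YOUR_AIMLAPI_KEY>",
    base_url="https://api.aimlapi.com/v1",
)

def embed(text: str) -> np.ndarray:
    """Request an embedding for a single string and return it as a numpy array."""
    response = client.embeddings.create(input=text, model="voyage-law-2")
    return np.array(response.data[0].embedding)

# Hypothetical query/passage pair, used only for illustration
query = embed("What is the limitation period for contractual claims?")
passage = embed("Claims arising from a contract are time-barred after six years.")

# Cosine similarity: values closer to 1 indicate semantically closer texts
similarity = float(query @ passage / (np.linalg.norm(query) * np.linalg.norm(passage)))
print(similarity)
```

In a real retrieval setup you would precompute and store the passage vectors and embed only the query at request time; the use-case article linked below goes into more detail.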
You can find a more advanced example of using embedding vectors in our article [Find Relevant Answers: Semantic Search with Text Embeddings](https://docs.aimlapi.com/use-cases/find-relevant-answers-semantic-search-with-text-embeddings) in the Use Cases section. --- # Source: https://docs.aimlapi.com/api-references/embedding-models/anthropic/voyage-multilingual-2.md # voyage-multilingual-2 {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `voyage-multilingual-2` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview Optimized for multilingual retrieval and retrieval-augmented generation (RAG), this model surpasses alternatives like OpenAI v3 large and Cohere multilingual v3 across most languages, including French, German, Japanese, Spanish, and Korean. On average, it outperforms the second-best model by 5.6%, while maintaining strong performance in English. Additionally, it supports a large 32K context length. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema {% openapi src="" path="/v1/embeddings" method="post" %} [voyage-multilingual-2.json](https://3927338786-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FROMd1X5PuqtikJ48n2N9%2Fuploads%2Fgit-blob-9b27d31f5fb3f152e8456f5b1a40b26cbacc0653%2Fvoyage-multilingual-2.json?alt=media\&token=d3fe44f4-58c7-4e52-8593-c8fb6e05117d) {% endopenapi %} ## Code Example {% tabs %} {% tab title="Python" %}
```python
import openai

# Initialize the API client
client = openai.OpenAI(
    # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>
    api_key="<YOUR_AIMLAPI_KEY>",
    base_url="https://api.aimlapi.com/v1",
)

# Define the text for which to generate an embedding
text = "Laura is a DJ."

# Request the embedding
response = client.embeddings.create(
    input=text,
    model="voyage-multilingual-2"
)

# Print the embedding
print(response)
```
{% endtab %} {% tab title="JS" %} ```javascript import OpenAI from "openai"; import util from "util"; // Initialize the API client const client = new OpenAI({ // Insert your AIML API Key instead of apiKey: "", baseURL: "https://api.aimlapi.com/v1", }); // Define the text for which to generate an embedding const text = "Laura is a DJ."; const response = await client.embeddings.create({ input: text, model: "voyage-multilingual-2", }); // Convert embedding to a regular array (not TypedArray) const pythonLikeResponse = { ...response, data: response.data.map(item => ({ ...item, embedding: Array.from(item.embedding), })), }; // Python-like print console.log( util.inspect(pythonLikeResponse, { depth: null, maxArrayLength: null, compact: true, }) ); ``` {% endtab %} {% endtabs %} This example shows how to set up an API client, send text to the embedding API, and print the response with the embedding vector. See how large a vector response the model generates from just a single short input phrase.
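If you only need the numeric vector itself (for example, to store it in a vector database), you can pull it out of the SDK response object. A minimal sketch, assuming the `response` object returned by the Python example above:

```python
# `response` comes from client.embeddings.create(...) as shown above
embedding = response.data[0].embedding   # the raw list of floats
print(len(embedding))                    # dimensionality of the vector
print(embedding[:5])                     # first few components
print(response.usage.total_tokens)       # tokens counted for the input text
```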
Response {% code overflow="wrap" %} ```json CreateEmbeddingResponse(data=[Embedding(embedding=[0.057845164090394974, -0.0009027316118590534, 0.0018743404652923346, -0.013910274021327496, -0.01372489146888256, -0.022771991789340973, -0.046326544135808945, 0.03561760112643242, 0.012907890602946281, 0.0007679080590605736, 0.0006164147634990513, -0.029242057353258133, 0.010541332885622978, -0.045086756348609924, 0.04110909253358841, 0.04766305536031723, -0.02904568240046501, -0.032282181084156036, 0.03853169083595276, 0.05892375484108925, -0.01231437362730503, -0.04767551273107529, 0.011599955148994923, 0.02466391772031784, -0.009037940762937069, 0.013361179269850254, 0.0178985595703125, 0.033255256712436676, -0.018267124891281128, 0.009587126784026623, 0.05244050174951553, -0.012508914805948734, -0.018404515460133553, -0.057481732219457626, 0.029757903888821602, -0.027167314663529396, -0.05003126338124275, 0.0206646416336298, 0.04715307056903839, -0.0335824228823185, 0.019010121002793312, -0.0026166026946157217, -0.03323107585310936, -0.04417522996664047, -0.028019119054079056, 0.031471043825149536, -0.01441586297005415, -0.006250427570194006, -0.015249716117978096, -0.024323927238583565, -0.028204500675201416, -0.016545195132493973, -0.024619953706860542, -0.017242394387722015, 0.041103046387434006, 0.026019664481282234, 0.03304935619235039, 0.02037154696881771, 0.09659228473901749, -0.02665696293115616, 0.02161719836294651, -0.004972717724740505, 0.022623611614108086, 0.02270311303436756, -0.001752889365889132, -0.02740435115993023, -0.023277578875422478, 0.004533441737294197, -0.048721130937337875, -0.024136347696185112, -0.03798653557896614, -0.001716802129521966, 0.028371566906571388, -0.005371143575757742, -0.037907399237155914, -0.044046271592378616, -0.00234915385954082, 0.007551584858447313, 0.06384774297475815, -0.02369963563978672, -0.028201570734381676, -0.038272302597761154, 0.009272050112485886, -0.04817670211195946, 0.008809693157672882, -0.05541248247027397, 0.01776226982474327, -0.019339853897690773, 0.010907519608736038, 0.010341846384108067, -0.027222635224461555, 0.029933759942650795, -0.006886809132993221, -0.020414777100086212, 0.023778771981596947, 0.014393880032002926, -0.04636465013027191, -0.032630231231451035, -0.04958575963973999, -0.03882332146167755, 0.0035010159481316805, -0.019186710938811302, -0.02101488970220089, 0.007120735477656126, 0.01952468603849411, 0.04418402537703514, 0.015531454235315323, -0.030933212488889694, 0.015098681673407555, -0.006455044262111187, -0.008143635466694832, 0.0013269862392917275, 0.05993200093507767, -0.030396848917007446, 0.004764253739267588, 0.0021702744998037815, -0.04451668635010719, -0.0035112742334604263, 0.027741411700844765, -0.009438381530344486, 0.0071203685365617275, -0.009677986614406109, -0.010770496912300587, 0.04360516369342804, 0.04361322149634361, 0.03371550515294075, -0.04943041875958443, -0.003502481384202838, -0.020469732582569122, 0.03891564905643463, -0.01905994676053524, -0.01611141487956047, 0.07288824766874313, -0.00537828728556633, -0.014172228053212166, 0.025775479152798653, 0.03485628589987755, -0.019015250727534294, -0.03625214844942093, 0.0402580201625824, 0.09400133043527603, -0.03186379000544548, 0.006665934808552265, -0.0055109127424657345, 0.03123290278017521, 0.019058482721447945, -0.043079059571027756, 0.029966000467538834, -0.021543558686971664, -0.025723455473780632, 0.006937551312148571, -0.023517917841672897, 0.0011394055327400565, 0.041642893105745316, -0.013444253243505955, 
0.007449001539498568, -0.006767097860574722, 0.019321534782648087, 0.03649028763175011, 0.00965746957808733, 0.045523468405008316, 0.011718017049133778, -0.034857749938964844, -0.0014097854727879167, 0.04307026416063309, 0.03536480292677879, 0.031992752104997635, 0.03766816481947899, 0.01964760199189186, 0.007684942800551653, -0.001420295680873096, -0.015039420686662197, -0.006210676394402981, -0.01918451301753521, 0.012890716083347797, -0.0032767984084784985, 0.010368957184255123, -0.04118639603257179, -0.016430888324975967, -0.07126303762197495, -0.01041053980588913, 0.07100512087345123, -0.03611805662512779, 0.01493299100548029, 0.009442045353353024, 0.027436595410108566, 0.04818769916892052, 0.05246980860829353, 0.021413495764136314, 0.030949333682656288, 0.0370713472366333, -0.030329439789056778, 0.027228495106101036, 0.019177917391061783, -0.006164513994008303, -0.04813786968588829, -0.030197545886039734, 0.009045039303600788, -0.031243160367012024, 0.014569737017154694, -0.016682950779795647, 0.03158827871084213, 0.014127896167337894, 0.024068569764494896, 0.06960046291351318, 0.01983719691634178, 0.003191801020875573, 0.004835512954741716, -0.02859431691467762, -0.012031537480652332, 0.04464711248874664, 0.004305561073124409, 0.02133655920624733, -0.054638709872961044, 0.003623016644269228, -0.005689701065421104, -0.0195347610861063, 0.009204455651342869, 0.06583493202924728, -0.007590419612824917, -0.020083580166101456, 0.009608376771211624, -0.014942699111998081, -0.03729923069477081, -0.028290964663028717, 0.025601821020245552, 0.01639571785926819, 0.002022353233769536, -0.04055257886648178, -0.037996795028448105, 0.007241636980324984, 0.015836775302886963, 0.013989408500492573, 0.030728045850992203, -0.021143849939107895, 0.03177732601761818, -0.006387998815625906, -0.016457999125123024, -0.032798394560813904, -0.013876567594707012, 0.07038522511720657, 0.03883577883243561, -0.021638356149196625, -0.03750805929303169, 0.07091718912124634, -0.04667240008711815, 0.026717044413089752, -0.003357491223141551, -0.0483371764421463, -0.033371761441230774, -0.04099075868725777, 0.008803099393844604, 0.0508284792304039, 0.0024781154934316874, 0.03905486688017845, 0.01993776485323906, -0.06363672018051147, -0.0055959103628993034, -0.012547474354505539, -0.017691195011138916, -0.024845635518431664, 0.015546474605798721, 0.0263990368694067, -0.02122298628091812, 0.03110833652317524, 0.03882625326514244, 0.04800964146852493, -0.007528136484324932, -0.041631169617176056, -0.02029900625348091, 0.0009921254822984338, -0.05200086161494255, 0.008155359886586666, 0.015196959488093853, -0.016482912003993988, -0.07117511332035065, 0.03370881825685501, -0.0021044197492301464, -0.006149126682430506, 0.03757254034280777, -0.05066874250769615, -0.017506545409560204, 0.014065613970160484, 0.10154558718204498, 0.005148940719664097, 0.015364022925496101, -0.0566786527633667, 0.03101234883069992, -0.03444741666316986, -0.005016833543777466, -0.01653713546693325, -0.004551760386675596, 0.010944155976176262, 0.012217652052640915, 0.025062525644898415, -0.07248817384243011, 0.0432402603328228, 0.0178410392254591, -0.05529671162366867, 0.035077571868896484, 0.026747819036245346, -0.013445718213915825, 0.031428541988134384, 0.023121874779462814, -0.050834089517593384, 0.014191645197570324, 0.07455595582723618, -0.05089442431926727, 0.06400015205144882, 0.006336340680718422, -0.003738697385415435, -0.024789949879050255, -0.049121204763650894, -0.021647972986102104, -0.0075376625172793865, -0.016145853325724602, 
0.026882095262408257, 0.023737555369734764, -0.0028968744445592165, -0.0005803274689242244, 0.018627632409334183, 0.008923817425966263, 0.03402096405625343, -0.0006155446171760559, -0.043823517858982086, -0.07041599601507187, -0.03705596178770065, 0.002571905730292201, -0.0168021097779274, 0.03527247905731201, -0.012257220223546028, -0.02173427864909172, -0.02162306010723114, -0.02734573557972908, 0.035801514983177185, 0.03520360216498375, -0.003950916230678558, 0.007991409860551357, 0.030921489000320435, -0.03044521063566208, -0.007312712259590626, 0.017868516966700554, -0.002456133486703038, -0.04013785347342491, -0.02091377042233944, -0.03032064624130726, 0.029716504737734795, 0.04513658210635185, -0.009101688861846924, 0.025754230096936226, 0.03669453784823418, -0.0242554172873497, -0.0586775541305542, -0.03932158276438713, -0.04619758576154709, 0.02513946406543255, 0.011140162125229836, -0.02658662013709545, -0.02459430694580078, 0.01762891188263893, 0.03304349258542061, -0.017527062445878983, 0.028978271409869194, 0.005245662294328213, -0.01639150269329548, 0.07581187039613724, 0.011413106694817543, 0.0013929326087236404, -0.005938097834587097, 0.035190410912036896, -0.010111766867339611, -0.01561919879168272, -0.029244987294077873, 0.02467857301235199, -0.03860844671726227, -0.03076614812016487, 0.011536847800016403, 0.04979971796274185, -0.025591563433408737, -0.009136494249105453, -0.03457418084144592, -0.03322118520736694, -0.00378531776368618, -0.01686466857790947, -0.023610243573784828, -0.02731715701520443, 0.03662804514169693, -0.015455249696969986, -0.010230286978185177, 0.026115605607628822, -0.024274468421936035, -0.00718411710113287, -0.0013951306464150548, -0.021908828988671303, 0.028307083994150162, -0.021137990057468414, -0.0019109774148091674, -0.022319160401821136, -0.008841750212013721, -0.02746443636715412, 0.006952938623726368, 0.005442768335342407, -0.010238896124064922, 0.016415134072303772, -0.05650865286588669, -0.010008450597524643, -0.016973845660686493, -0.0031368457712233067, -0.0053885458037257195, -0.02880900911986828, -0.03755861893296242, 0.015317128039896488, -0.035344287753105164, 0.0690225139260292, -0.05149087682366371, -0.01810739003121853, -0.016629641875624657, 0.01870218850672245, 0.005341650918126106, 0.004787976387888193, -0.0033903727307915688, -0.012824357487261295, 0.006840829737484455, 0.014728100039064884, 0.05206534266471863, 0.0869135633111, 0.003285591257736087, 0.004043607506901026, 0.029869280755519867, 0.018678924068808556, -0.018707500770688057, 0.01013521384447813, -0.024798739701509476, 0.029745811596512794, -0.047047559171915054, 0.033333659172058105, -0.005262515041977167, -0.05391769856214523, -0.012925475835800171, -0.029549438506364822, 0.026096416637301445, 0.043419044464826584, -0.05861014500260353, -0.017705850303173065, -0.015149331651628017, 0.03757473826408386, -0.008793573826551437, -0.04183486849069595, 0.013096203096210957, -0.018717026337981224, -0.0091738635674119, 0.03411219269037247, -0.018549229949712753, -0.012181198224425316, 0.007967046461999416, 0.014891407452523708, 0.029702214524149895, 0.028654400259256363, 0.006773784290999174, 0.022656220942735672, -0.03220817446708679, 0.01590551622211933, 0.0017717572627589107, 0.009474744088947773, 0.0054181297309696674, -0.017475588247179985, -0.038662850856781006, 0.0070033143274486065, -0.029505109414458275, -0.0012638333719223738, 0.05868781358003616, 0.011697225272655487, -0.008891209959983826, 0.021945463493466377, -0.0075046890415251255, 
0.014229013584554195, 0.0035515748895704746, -0.030845284461975098, -0.09485716372728348, 0.018412208184599876, 0.023707697167992592, 0.07369179278612137, 0.022291315719485283, -0.05279221385717392, -0.010190902277827263, 0.05164328217506409, 0.031188298016786575, 0.002980040153488517, -0.08695826679468155, 0.0429794043302536, 0.02561354450881481, 0.029860487207770348, 0.03285078704357147, 0.08706597983837128, 0.04027707129716873, -0.018498672172427177, -0.0030276679899543524, -0.0067353155463933945, 0.019855698570609093, 0.026196803897619247, 0.0018959562294185162, 0.0018670131685212255, -0.06255611777305603, 0.0003326624573674053, 0.03328383341431618, -0.0029316795989871025, -0.03356941416859627, -0.0012053519021719694, 0.009081904776394367, -0.028531301766633987, 0.01601908914744854, -0.05138242617249489, 0.05166233703494072, 0.006489116232842207, -0.02298448607325554, -0.04391730949282646, -0.0021850208286195993, -0.02515668421983719, -0.06237054988741875, -0.0012060846202075481, -0.07159717381000519, 0.028786294162273407, 0.021236909553408623, -0.005362899973988533, -0.0021058854181319475, -0.016253840178251266, 0.0015795972431078553, -0.04117174446582794, 0.008954042568802834, 0.012817030772566795, 0.0430087111890316, 0.016087349504232407, -0.007523740641772747, -0.015479063615202904, 0.002503028605133295, 0.02588319033384323, -0.06546562910079956, 0.03749120607972145, 0.012339286506175995, 0.040704987943172455, -0.02914973348379135, -0.004227890633046627, -0.026813767850399017, -0.0640060156583786, -0.006373709999024868, -0.03301382064819336, 0.005635111592710018, -0.019742857664823532, 0.047415394335985184, -0.030029015615582466, 0.00021505821496248245, 0.004114488139748573, -0.022802766412496567, 0.0030044037848711014, 0.01207650825381279, 0.0293387770652771, 0.0003573923313524574, -0.027043480426073074, 0.03645658120512962, 0.03444448858499527, 0.020135605707764626, 0.029239125549793243, -0.03881965950131416, 0.018692845478653908, -0.05331685394048691, 0.014688440598547459, -0.010225065983831882, 0.006867208518087864, 0.0084206098690629, -0.006458891090005636, -0.07007893919944763, 0.03593926876783371, -0.011912833899259567, -0.029922036454081535, 0.028507854789495468, 0.005577591713517904, 0.0030360945966094732, 0.016278479248285294, 0.02660713531076908, -0.026481103152036667, -0.005822600796818733, 0.057106565684080124, 0.0060425130650401115, 0.016240376979112625, 0.036557700484991074, -0.012101879343390465, -0.004008070100098848, 0.03735161945223808, 0.03224238380789757, 0.014100052416324615, 0.0008642629254609346, -0.005151322111487389, 0.009142355993390083, 0.024317516013979912, -0.00884046871215105, -0.0005561471916735172, -0.009329204447567463, 0.06919965893030167, -0.06338465213775635, 0.008922352455556393, 0.026835748925805092, 0.010519168339669704, -0.03307133913040161, 0.04510214552283287, -0.01816747337579727, -0.01031180378049612, 0.007164699956774712, -0.0312790647149086, -0.020532015711069107, -0.02288922853767872, -0.01958330348134041, 0.029145335778594017, 0.001333580818027258, -0.020211076363921165, -0.023764848709106445, 0.0097893625497818, -0.024109236896038055, -0.00482800230383873, -0.02953515015542507, -0.006391295697540045, -0.03866212069988251, 0.004425729624927044, 0.032025355845689774, 0.04346887394785881, 0.005736229475587606, 0.00733616016805172, 0.012607468292117119, 0.0201000664383173, 0.028198640793561935, -0.05013091117143631, -0.023890148848295212, -0.024855894967913628, 0.016712257638573647, -0.058840591460466385, -0.06298457831144333, 
-0.02979307435452938, -0.00593296904116869, 0.03960295394062996, 0.08233029395341873, 0.022603461518883705, 0.038143340498209, -0.009229734539985657, -0.014767575077712536, -0.05690726637840271, 0.0736502930521965, 0.03506511449813843, 0.00795898586511612, -0.02190498076379299, -0.014072941616177559, -0.004499552771449089, -0.01748456433415413, 0.011000942438840866, 0.03698268532752991, 0.02564871497452259, 0.024660620838403702, 0.04296181723475456, 0.007136855740100145, -0.02447633631527424, 0.019411660730838776, 0.002154245972633362, 0.04516588896512985, -0.011527413502335548, -0.04524649307131767, 0.05214410647749901, -0.05337253957986832, -0.038242995738983154, 0.01305150706321001, -0.02641955390572548, 0.002049830975010991, -0.002212315332144499, 0.061227478086948395, -0.036774590611457825, 0.022928796708583832, -0.0519862063229084, 0.03575168922543526, -0.04780374467372894, 0.00336033059284091, 0.0033691232092678547, 0.029873674735426903, 0.011861542239785194, 0.05260756239295006, 0.019306879490613937, 0.02792038396000862, 0.03414259850978851, -0.0017593008233234286, -0.01442905142903328, -0.027037985622882843, -0.021172426640987396, 0.020393528044223785, 0.0554916188120842, 0.009962196461856365, 0.02529645338654518, 0.03517429530620575, 0.010045819915831089, 0.0375058613717556, -0.06613681465387344, 0.033504385501146317, 0.03355934098362923, 0.06702489405870438, -0.006382137071341276, -0.042662858963012695, -0.019240200519561768, 0.008492968045175076, -0.017111599445343018, -0.03916896879673004, -0.0035083433613181114, 0.002014476340264082, 0.0060454439371824265, 0.009458165615797043, 0.00035464458051137626, -0.018818143755197525, -0.01000185590237379, 0.02740362100303173, 0.04639102518558502, 0.012544453144073486, -0.0037281643599271774, 0.03146957606077194, -0.030648911371827126, 0.018451042473316193, -0.027471764013171196, 0.0059256418608129025, 0.02417444996535778, -0.059920281171798706, 0.040218453854322433, -0.01118229515850544, -0.004545532166957855, 0.022634604945778847, 0.051191918551921844, -0.012872719205915928, -0.03148569539189339, 0.04341684654355049, -0.06210090219974518, 0.030177028849720955, 0.03685225918889046, 0.02763296850025654, 0.0029009045101702213, -0.08340302109718323, -0.04452694207429886, -0.015347902663052082, -0.016872361302375793, -0.01572306454181671, -0.024205774068832397, -0.010738257318735123, 0.04418255761265755, -0.01665364019572735, -0.006684023886919022, -0.01319365669041872, 0.01918744295835495, 0.021126264706254005, -0.04279915243387222, -0.023488609120249748, -0.058640554547309875, 0.017625248059630394, 0.0012222048826515675, -0.00740064075216651, -0.0065489262342453, 0.0026385849341750145, -0.002881120890378952, 0.005874671041965485, 0.017962858080863953, 0.021306518465280533, -0.022814489901065826, -0.02079213783144951, -0.04342307895421982, 0.038663219660520554, -0.06817382574081421, -0.04299332574009895, -0.022769244387745857, 0.011341298930346966, 0.004675592761486769, -0.0017335634911432862, 0.03858005255460739, -0.0016050597187131643, -0.007442040368914604, 0.010117262601852417, -0.010849266313016415, -0.013963030651211739, 0.010359248146414757, -0.011741372756659985, 0.03153112530708313, 0.03391691669821739, -0.10248935222625732, -0.018841590732336044, -0.044060926884412766, 0.05467681214213371, 0.034804992377758026, 0.01901378482580185, 0.006955137010663748, -0.00043378013651818037, -0.016307787969708443, 0.01833343878388405, 0.019805140793323517, -0.02074231021106243, 0.03904900699853897, 0.021357810124754906, 
-0.0016420629108324647, 0.02659870870411396, -0.05513111129403114, -0.050720036029815674, -0.0075733838602900505, 0.02621585503220558, -0.004296767991036177, -0.0211570393294096, 0.037041306495666504, 0.012433167546987534, -0.0646156519651413, 0.02500024437904358, 0.01616930030286312, 0.0670454129576683, 0.005961362738162279, 0.009647211991250515, 0.047027040272951126, 0.023166203871369362, -0.05796826630830765, 0.03150767832994461, -0.059228572994470596, -0.01618010923266411, -0.0021747625432908535, -0.005702157039195299, -0.02735745906829834, -0.028260190039873123, 0.04713548347353935, 0.029802601784467697, -0.014292028732597828, -0.0037303625140339136, -0.016095753759145737, 0.007123666349798441, -0.007317108567804098, 0.0026128473691642284, -0.017491890117526054, 0.00951824989169836, 0.0043143536895513535, -0.029180508106946945, 0.023087069392204285, -0.014172593131661415, 0.032633163034915924, -0.029881004244089127, -0.01117057166993618, 0.04170737415552139, -0.0323503278195858, 0.03645511716604233, 0.032149557024240494, -0.025000976398587227, 0.031025536358356476, -0.023846914991736412, 0.007368400227278471, 0.011295869015157223, 0.05060279369354248, 0.029210548847913742, 0.017327941954135895, -0.04537985101342201, 0.04985174536705017, 0.05425769090652466, -0.07071788609027863, 0.03828782960772514, -0.005519339349120855, 0.0008235960267484188, -0.01438458263874054, 0.00419867318123579, -0.020005909726023674, 0.020385468378663063, 0.01701268181204796, 0.059005822986364365, 0.025953534990549088, 0.008850726298987865, 0.007349349558353424, 0.0373900905251503, -0.018437854945659637, -0.04028879478573799, -0.019248994067311287, 0.019583120942115784, -0.010924371890723705, -0.041212040930986404, 0.028338178992271423, 0.009138692170381546, -0.012362733483314514, -0.03297315165400505, -0.048976119607686996, 0.01534423977136612, -0.041257474571466446, 0.00857448484748602, -0.019805872812867165, 0.049821700900793076, -0.014710422605276108, -0.015567861497402191, -0.014404870569705963, 0.0078065767884254456, -0.017839940264821053, 0.023479081690311432, -0.008627974428236485, -0.035085633397102356, -0.011232120916247368, 0.011116348206996918, -0.008819951675832272, 0.026842709630727768, -0.031918011605739594, 0.0198249239474535, -0.013773984275758266, 0.006071456242352724, -0.0351303294301033, 0.0259352158755064, -0.013237621635198593, -0.0016808980144560337, -0.01634167693555355, 0.03793817386031151, -0.05814998224377632, -0.007939934730529785, -0.016123231500387192, 0.02045581117272377, -0.04204736277461052, -0.010636406019330025, -0.07381296902894974, -0.053354959934949875, 0.005346047226339579, -0.08725428581237793, -0.04446759074926376, 0.011005706153810024, 0.0302942655980587, 0.0241978969424963, 0.05837566778063774, 0.029894191771745682, 0.06929931044578552, -0.020214740186929703, -0.0361444354057312, 0.000857301929499954, -0.005784223321825266, -0.012699426151812077, 0.0178981926292181, -0.007309781387448311, -0.004630529787391424, 0.006477254908531904, -0.02653459459543228, 0.024214016273617744, 0.006005144212394953, 0.06689006835222244, 0.0043260776437819, -0.021520476788282394, 0.002120173769071698, 0.043079059571027756, 0.043134741485118866, -0.010642267763614655, 0.014144612476229668, -0.035096075385808945, -0.003447800874710083, -0.005579057149589062, 0.01145157590508461, 0.01145157590508461, 0.027062438428401947, 0.07839696854352951, 0.05562497675418854, 0.04545898735523224, -0.011273520067334175, 0.011593909934163094, 0.04201805219054222, -0.014115439727902412, 
-0.014988861978054047, -0.009956425987184048, -0.032844189554452896, -0.030510425567626953, 0.0076351165771484375, -0.004629430361092091, 0.02782714180648327, -0.0340891107916832, 0.034775685518980026, -0.0013936653267592192, -0.05642658844590187, -0.01758861169219017, -0.0068441275507211685, 0.03825618326663971, 0.049726445227861404, -0.006399722304195166, -0.01911490224301815, 0.04333111643791199, -0.020998219028115273, 0.00421030493453145, -0.027794167399406433, 0.001966665266081691, 0.0027712101582437754, -0.04785320162773132, 0.006933154538273811, -0.004050568677484989, 0.005927839782088995, 0.06081714481115341, 0.023868165910243988, -0.035575833171606064, -0.023879889398813248, 3.370588819961995e-05, 0.014758416451513767, -0.009112313389778137, 0.010482348501682281, 0.03294677287340164, -0.06620422750711441, -0.035567041486501694, -0.04983196035027504, 0.01353511307388544, -0.0015563326887786388, 0.01810739003121853, -0.001902184565551579, -0.046697311103343964, -0.02415979467332363, 0.06844493746757507, 0.05294462665915489, 0.059990618377923965, -0.034081049263477325, -0.06128023564815521, 0.0006209943676367402, -0.022921469062566757, -0.011540602892637253, -0.03592168539762497, -0.04806166887283325, 0.03494274616241455, -0.0003458517312537879, -0.008194927126169205, -0.015179373323917389, -0.04232708737254143, -0.017277931794524193, 0.021506555378437042, 0.03709149733185768, 0.04129704087972641, -0.009954961016774178, -0.01002676971256733, 0.009094728156924248, -0.030097024515271187, 0.0015937023563310504], index=0, object='embedding')], model='voyage-multilingual-2', object='list', usage=Usage(prompt_tokens=None, total_tokens=7), meta={'usage': {'credits_used': 2}}) ``` {% endcode %}
You can find a more advanced example of using embedding vectors in our article [Find Relevant Answers: Semantic Search with Text Embeddings](https://docs.aimlapi.com/use-cases/find-relevant-answers-semantic-search-with-text-embeddings) in the Use Cases section. --- # Source: https://docs.aimlapi.com/api-references/image-models/alibaba-cloud/wan-2-6-image.md # wan2.6-image {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/wan-2-6-image` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview Text-to-Image and Image-to-Image generator in a single model, providing artists and creators with complete creative freedom. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/wan-2-6-image"]},"prompt":{"type":"string","maxLength":2000,"description":"A positive prompt that describes the desired elements and visual features in the edited image."},"image_urls":{"type":"array","items":{"type":"string","format":"uri"},"minItems":1,"maxItems":3,"description":"List of URLs or local Base64 encoded images to edit."},"image_size":{"anyOf":[{"type":"object","properties":{"width":{"type":"integer","minimum":512,"maximum":1440},"height":{"type":"integer","minimum":512,"maximum":1440}},"required":["width","height"],"description":"For both height and width, the value must be a multiple of 32."},{"type":"string","enum":["square_hd","square","portrait_4_3","portrait_16_9","landscape_4_3","landscape_16_9"],"description":"The size of the generated image."}],"default":"landscape_4_3","description":"The size of the generated image."},"enhance_prompt":{"type":"boolean","default":true,"description":"Optional parameter to use an LLM-based prompt rewriting feature for higher-quality images that better match the original prompt. 
Disabling it may affect image quality and prompt alignment."},"negative_prompt":{"type":"string","maxLength":500,"description":"The description of elements to avoid in the generated image."},"seed":{"type":"integer","minimum":0,"maximum":2147483647,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"watermark":{"type":"boolean","default":false,"description":"Add an invisible watermark to the generated images."}},"required":["model","prompt"],"title":"alibaba/wan-2-6-image"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image using two input images and a prompt that defines how they should be edited. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "alibaba/wan-2-6-image", "prompt": "Combine the images so the T-Rex is wearing a business suit, sitting in a cozy small café, drinking from the mug. Blur the background slightly to create a bokeh effect.", "image_urls": [ "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png", "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/blue-mug.jpg" ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'alibaba/wan-2-6-image', prompt: 'Combine the images so the T-Rex is wearing a business suit, sitting in a cozy small café, drinking from the mug. Blur the background slightly to create a bokeh effect.', image_urls: [ "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/t-rex.png", "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/blue-mug.jpg" ] }), }); const data = await response.json(); console.log(data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 ``` {% endcode %}
**Reference Images**: Image #1, Image #2

**Generated Image** (prompt): "Combine the images so the T-Rex is wearing a business suit, sitting in a cozy small café, drinking from the mug. Blur the background slightly to create a bokeh effect."
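The request schema above also lets you control the output explicitly. Below is a minimal text-to-image sketch (the prompt is a placeholder, and `image_urls` is omitted since only `model` and `prompt` are required): it sets an explicit `image_size` (both dimensions must be multiples of 32 within the 512–1440 range) and a fixed `seed` for reproducibility.

```python
import requests

def main():
    response = requests.post(
        "https://api.aimlapi.com/v1/images/generations",
        headers={
            # Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
            "Authorization": "Bearer <YOUR_AIMLAPI_KEY>",
            "Content-Type": "application/json",
        },
        json={
            "model": "alibaba/wan-2-6-image",
            # Placeholder prompt, for illustration only
            "prompt": "A cozy small café interior with warm evening lighting",
            # Explicit output size: width and height must be multiples of 32 (512-1440)
            "image_size": {"width": 1024, "height": 768},
            # The same seed with the same prompt reproduces the same image
            "seed": 42,
        },
    )
    response.raise_for_status()
    data = response.json()
    # On success, data["data"] holds the generated image(s) with a download URL
    print(data)

if __name__ == "__main__":
    main()
```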

--- # Source: https://docs.aimlapi.com/api-references/video-models/alibaba-cloud/wan-2.1-plus-text-to-video.md # Wan 2.1 Plus (Text-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/wan2.1-t2v-plus` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} A text-to-video (T2V) model that generates 720p silent video at \~30 FPS[^1]. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find two corresponding API schemas and examples for both endpoint calls.
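As a compact illustration of this two-call flow (the complete, ready-to-run example appears later on this page), here is a minimal sketch against the universal `/v2/video/generations` endpoint; the prompt is a placeholder:

```python
import requests
import time

API_KEY = "<YOUR_AIMLAPI_KEY>"  # Insert your AIML API Key
URL = "https://api.aimlapi.com/v2/video/generations"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# 1) Create the generation task — the response contains the generation ID
task = requests.post(URL, headers=HEADERS, json={
    "model": "alibaba/wan2.1-t2v-plus",
    "prompt": "A dragon soars over a mountain range at sunset",  # placeholder prompt
}).json()

# 2) Poll the task by ID until it leaves the queued/generating states
while True:
    result = requests.get(URL, headers=HEADERS,
                          params={"generation_id": task["id"]}).json()
    if result["status"] not in ("queued", "generating"):
        break
    time.sleep(10)

print(result)  # when status is "completed", result["video"]["url"] holds the download link
```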
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Video Generation This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/wan2.1-t2v-plus"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"resolution":{"type":"string","enum":["720P"],"default":"720P","description":"An enumeration where the short side of the video frame determines the resolution."},"aspect_ratio":{"type":"string","enum":["16:9","9:16","1:1","4:3","3:4"],"default":"16:9","description":"The aspect ratio of the generated video."},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"watermark":{"type":"boolean","default":false,"description":"Whether the video contains a watermark."},"seed":{"type":"integer","description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. If unspecified, a random number is chosen."},"enhance_prompt":{"type":"boolean","default":true,"description":"Whether to enable prompt expansion."}},"required":["model","prompt"],"title":"alibaba/wan2.1-t2v-plus"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Fetch the video After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : aimlapi_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/generate/video/alibaba/generation" headers = { "Authorization": f"Bearer {aimlapi_key}", } data = { "model": "alibaba/wan2.1-t2v-plus", "prompt": ''' A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming. ''' } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/generate/video/alibaba/generation" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {aimlapi_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... 
Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; const https = require("https"); const { URL } = require("url"); // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "alibaba/wan2.1-t2v-plus", prompt: ` A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming. ` }); const url = new URL(`${baseUrl}/generate/video/alibaba/generation`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data) } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const result = JSON.parse(body); callback(result); } }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/generate/video/alibaba/generation`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json" } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const result = JSON.parse(body); callback(result); }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 10 * 1000; // 10 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: a1bdaedd-4bd2-4cd3-8af4-4d9e6ebbce62 Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': 'a1bdaedd-4bd2-4cd3-8af4-4d9e6ebbce62', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/alpaca/1d/9d/20250804/8ca6ec02/a1bdaedd-4bd2-4cd3-8af4-4d9e6ebbce62.mp4?Expires=1754374087&OSSAccessKeyId=LTAI5tRcsWJEymQaTsKbKqGf&Signature=8%2F2foeH6dJyUQBdXSSKn9qnJxQQ%3D'}} ``` {% endcode %}
**Original**: [1280x720](https://drive.google.com/file/d/1xEDOCxL2kkLvg51wzKFFIVTJEnPm5IM_/view?usp=sharing) **Low-res GIF preview**:

"A menacing evil dragon appears in a distance above the tallest mountain,
then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming."

[^1]: Frame per second --- # Source: https://docs.aimlapi.com/api-references/video-models/alibaba-cloud/wan-2.1-turbo-text-to-video.md # Wan 2.1 Turbo (Text-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/wan2.1-t2v-turbo` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} A text-to-video (T2V) model that generates 480p and 720p silent video at \~30 FPS[^1]. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find two corresponding API schemas and examples for both endpoint calls.
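For quick orientation before the full example below, here is a minimal sketch of the request body for a 480P generation (the prompt is a placeholder). It is sent to `POST /v2/video/generations`, and the result is then fetched with the GET endpoint, exactly as shown in the full example further down:

```python
payload = {
    "model": "alibaba/wan2.1-t2v-turbo",
    "prompt": "A dragon soars over a mountain range at sunset",  # placeholder prompt
    "resolution": "480P",        # this model also supports 720P (the default)
    "aspect_ratio": "16:9",
}
```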
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Video Generation This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/wan2.1-t2v-turbo"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"resolution":{"type":"string","enum":["480P","720P"],"default":"720P","description":"An enumeration where the short side of the video frame determines the resolution."},"aspect_ratio":{"type":"string","enum":["16:9","9:16","1:1","4:3","3:4"],"default":"16:9","description":"The aspect ratio of the generated video."},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"watermark":{"type":"boolean","default":false,"description":"Whether the video contains a watermark."},"seed":{"type":"integer","description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. If unspecified, a random number is chosen."},"enhance_prompt":{"type":"boolean","default":true,"description":"Whether to enable prompt expansion."}},"required":["model","prompt"],"title":"alibaba/wan2.1-t2v-turbo"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Fetch the video After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : aimlapi_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/generate/video/alibaba/generation" headers = { "Authorization": f"Bearer {aimlapi_key}", } data = { "model": "alibaba/wan2.1-t2v-turbo", "prompt": ''' A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming. ''' } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/generate/video/alibaba/generation" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {aimlapi_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... 
Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; const https = require("https"); const { URL } = require("url"); // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "alibaba/wan2.1-t2v-turbo", prompt: ` A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming. ` }); const url = new URL(`${baseUrl}/generate/video/alibaba/generation`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data) } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const result = JSON.parse(body); callback(result); } }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/generate/video/alibaba/generation`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json" } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const result = JSON.parse(body); callback(result); }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 10 * 1000; // 10 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': '4f516080-dd10-4e3d-ba3d-97bb37b5323f', 'status': 'queued'} Generation ID: 4f516080-dd10-4e3d-ba3d-97bb37b5323f Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': '4f516080-dd10-4e3d-ba3d-97bb37b5323f', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/alpaca/1d/9d/20250805/5259de3d/4f516080-dd10-4e3d-ba3d-97bb37b5323f.mp4?Expires=1754465864&OSSAccessKeyId=LTAI5tRcsWJEymQaTsKbKqGf&Signature=0%2BryY6k77uerXq7p8Jkp3tY%2FgqQ%3D'}} ``` {% endcode %}
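Once the status is `completed`, the `video.url` field in the response above points to the finished file. If you also want to save it locally, a minimal sketch is shown below (the output file name is an arbitrary choice, not something returned by the API):

{% code overflow="wrap" %}
```python
import requests

def download_video(completed_response, file_name="generated_video.mp4"):
    # The completed response exposes the file location under ["video"]["url"]
    video_url = completed_response["video"]["url"]
    with requests.get(video_url, stream=True) as video_response:
        video_response.raise_for_status()
        with open(file_name, "wb") as file:
            for chunk in video_response.iter_content(chunk_size=8192):
                file.write(chunk)
    return file_name
```
{% endcode %}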
**Processing time**: \~1.5 min. **Original**: [1280x720](https://drive.google.com/file/d/1z5tcEgdLYjr1c6k1pwAinm3xr-H5tPIi/view?usp=sharing) **Low-res GIF preview**:

"A menacing evil dragon appears in a distance above the tallest mountain,
then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming."

[^1]: Frames per second --- # Source: https://docs.aimlapi.com/api-references/video-models/alibaba-cloud/wan-2.2-14b-animate-move-image-to-video.md # Wan 2.2 Animate Move (Image-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/wan2.2-14b-animate-move` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} This model produces high-quality character animations, accurately capturing expressions and movements from reference videos. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find two corresponding API schemas and examples for both endpoint calls.
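For a quick orientation before the schemas and full examples below, here is a sketch of a typical request payload for this model. Only `model`, `video_url`, and `image_url` are required; the other fields are optional, and the URLs are placeholders:

{% code overflow="wrap" %}
```python
# A request payload sketch for alibaba/wan2.2-14b-animate-move.
# The URLs are placeholders; optional fields are shown with example values.
payload = {
    "model": "alibaba/wan2.2-14b-animate-move",
    "video_url": "https://example.com/reference_motion.mp4",  # reference video providing the motion
    "image_url": "https://example.com/character.jpeg",        # character image to animate
    "resolution": "480p",            # "480p" | "580p" | "720p"
    "num_inference_steps": 20,       # higher values: better quality, slower generation
    "video_quality": "high",         # "low" | "medium" | "high" | "maximum"
    "video_write_mode": "balanced",  # "fast" | "balanced" | "small"
}
```
{% endcode %}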
## API Schemas ### Video Generation This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/wan2.2-14b-animate-move"]},"video_url":{"type":"string","format":"uri","description":"A HTTPS URL pointing to a video or a data URI containing a video. This video will be used as a reference during generation."},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image. If the input image does not match the chosen aspect ratio, it is resized and center cropped"},"resolution":{"type":"string","enum":["480p","580p","720p"],"default":"480p","description":"The resolution of the output video, where the number refers to the short side in pixels."},"num_inference_steps":{"type":"integer","default":20,"description":"Number of inference steps for sampling. Higher values give better quality but take longer"},"enable_safety_checker":{"type":"boolean","description":"If set to true, the safety checker will be enabled."},"shift":{"type":"number","default":5,"description":"Shift value for the video."},"video_quality":{"type":"string","enum":["low","medium","high","maximum"],"default":"high","description":"The quality of the generated video."},"video_write_mode":{"type":"string","enum":["fast","balanced","small"],"default":"balanced","description":"The write mode of the output video. Faster write mode means faster results but larger file size, balanced write mode is a good compromise between speed and quality, and small write mode is the slowest but produces the smallest file size"}},"required":["model","video_url","image_url"],"title":"alibaba/wan2.2-14b-animate-move"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Fetch the video After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # replace with your actual AI/ML API key api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "alibaba/wan2.2-14b-animate-move", "video_url": "https://storage.googleapis.com/falserverless/example_inputs/wan_animate_input_video.mp4", "image_url": "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", "resolution": "720p", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... 
Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "alibaba/wan2.2-14b-animate-move", video_url: "https://storage.googleapis.com/falserverless/example_inputs/wan_animate_input_video.mp4", image_url: "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", resolution: "720p", }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("Failed to start generation"); return; } const genId = genResponse.id; console.log("Gen_ID:", genId); const startTime = Date.now(); const timeout = 600000; const checkStatus = () => { if (Date.now() - startTime > timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, 10000); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: b5592d70-dd31-4e5a-bc5c-5063660c001b:alibaba/wan2.2-14b-animate-move Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {"id":"b5592d70-dd31-4e5a-bc5c-5063660c001b:alibaba/wan2.2-14b-animate-move","status":"completed","video":{"url":"https://v3b.fal.media/files/b/panda/4VjTJeQXFX3183b8Xe3d2_wan_animate_output.mp4"}} ``` {% endcode %}
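The polling examples above treat any status other than the in-progress ones as completion. According to the response schema, a task can also finish with `status` equal to `error`, in which case the `error` object carries the details. A minimal sketch of how you might branch on the final status (field names are taken from the schema above):

{% code overflow="wrap" %}
```python
def handle_result(response_data):
    # Branch on the final status reported by the fetch endpoint
    status = response_data.get("status")
    if status == "completed":
        return response_data["video"]["url"]
    if status == "error":
        # The schema defines error as an object with "name" and "message"
        error = response_data.get("error") or {}
        raise RuntimeError(f"Generation failed: {error.get('name')}: {error.get('message')}")
    raise RuntimeError(f"Unexpected terminal status: {status}")
```
{% endcode %}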
--- # Source: https://docs.aimlapi.com/api-references/video-models/alibaba-cloud/wan-2.2-14b-animate-replace-image-to-video.md # Wan 2.2 Animate Replace (Image-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/wan2.2-14b-animate-replace` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} This model is designed to integrate animated characters into reference videos by replacing the original subject while preserving the scene’s lighting, color tone, and overall visual consistency for seamless blending with the environment. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find two corresponding API schemas and examples for both endpoint calls.
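The request schema below notes that `image_url` accepts either a direct link to an online image or a Base64-encoded local image. If your character image is a local file, one common approach is to pass it as a data URI; the exact encoding format expected by the API is an assumption here, so verify it against your own tests:

{% code overflow="wrap" %}
```python
import base64

def image_to_data_uri(path, mime_type="image/jpeg"):
    # Encode a local image as a data URI (assumed format; adjust the MIME type to your file)
    with open(path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")
    return f"data:{mime_type};base64,{encoded}"

# Example: payload["image_url"] = image_to_data_uri("character.jpeg")
```
{% endcode %}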
## API Schemas ### Video Generation This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/wan2.2-14b-animate-replace"]},"video_url":{"type":"string","format":"uri","description":"A HTTPS URL pointing to a video or a data URI containing a video. This video will be used as a reference during generation."},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image. If the input image does not match the chosen aspect ratio, it is resized and center cropped"},"resolution":{"type":"string","enum":["480p","580p","720p"],"default":"480p","description":"The resolution of the output video, where the number refers to the short side in pixels."},"num_inference_steps":{"type":"integer","default":20,"description":"Number of inference steps for sampling. Higher values give better quality but take longer"},"enable_safety_checker":{"type":"boolean","description":"If set to true, the safety checker will be enabled."},"shift":{"type":"number","default":5,"description":"Shift value for the video."},"video_quality":{"type":"string","enum":["low","medium","high","maximum"],"default":"high","description":"The quality of the generated video."},"video_write_mode":{"type":"string","enum":["fast","balanced","small"],"default":"balanced","description":"The write mode of the output video. Faster write mode means faster results but larger file size, balanced write mode is a good compromise between speed and quality, and small write mode is the slowest but produces the smallest file size"}},"required":["model","video_url","image_url"],"title":"alibaba/wan2.2-14b-animate-replace"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Fetch the video After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # replace with your actual AI/ML API key api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "alibaba/wan2.2-14b-animate-replace", "video_url": "https://storage.googleapis.com/falserverless/example_inputs/wan_animate_input_video.mp4", "image_url": "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", "resolution": "720p", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... 
Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "alibaba/wan2.2-14b-animate-replace", video_url: "https://storage.googleapis.com/falserverless/example_inputs/wan_animate_input_video.mp4", image_url: "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", resolution: "720p", }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("Failed to start generation"); return; } const genId = genResponse.id; console.log("Gen_ID:", genId); const startTime = Date.now(); const timeout = 600000; const checkStatus = () => { if (Date.now() - startTime > timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, 10000); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: b5592d70-dd31-4e5a-bc5c-5063660c001b:alibaba/wan2.2-14b-animate-replace Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {"id":"b5592d70-dd31-4e5a-bc5c-5063660c001b:alibaba/wan2.2-14b-animate-replace","status":"completed","video":{"url":"https://v3b.fal.media/files/b/panda/4VjTJeQXFX3183b8Xe3d2_wan_animate_output.mp4"}} ``` {% endcode %}
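The response schema also defines an optional `meta.usage.credits_used` field. The sample output above does not include it, but when the server returns it you can track the cost of each generation, for example:

{% code overflow="wrap" %}
```python
def report_credits(response_data):
    # "meta" and "usage" are nullable in the schema, so guard each level
    meta = response_data.get("meta") or {}
    usage = meta.get("usage") or {}
    credits_used = usage.get("credits_used")
    if credits_used is not None:
        print(f"Credits used for this generation: {credits_used}")
    else:
        print("No usage information returned for this generation.")
```
{% endcode %}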
--- # Source: https://docs.aimlapi.com/api-references/video-models/alibaba-cloud/wan-2.2-plus-text-to-video.md # Wan 2.2 Plus (Text-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/wan2.2-t2v-plus` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} A text-to-video (T2V) model that generates 480p and 1080p silent video at \~30 FPS[^1]. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find two corresponding API schemas and examples for both endpoint calls.
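Before the schemas below, the following sketch shows what a request payload for this model can look like. Only `model` and `prompt` are required; the remaining fields are optional schema parameters shown with example values:

{% code overflow="wrap" %}
```python
# A request payload sketch for alibaba/wan2.2-t2v-plus.
# Only "model" and "prompt" are required; the rest are optional examples.
payload = {
    "model": "alibaba/wan2.2-t2v-plus",
    "prompt": "A lighthouse on a stormy coast at dusk, waves crashing against the rocks.",
    "resolution": "480P",             # "480P" | "1080P" (default "1080P")
    "aspect_ratio": "16:9",           # "16:9" | "9:16" | "1:1" | "4:3" | "3:4"
    "negative_prompt": "blurry, low quality",
    "seed": 42,                       # reuse the same seed for more repeatable results
    "watermark": False,
    "enhance_prompt": True,           # enable prompt expansion
}
```
{% endcode %}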
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Video Generation This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/wan2.2-t2v-plus"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"resolution":{"type":"string","enum":["480P","1080P"],"default":"1080P","description":"An enumeration where the short side of the video frame determines the resolution."},"aspect_ratio":{"type":"string","enum":["16:9","9:16","1:1","4:3","3:4"],"default":"16:9","description":"The aspect ratio of the generated video."},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"watermark":{"type":"boolean","default":false,"description":"Whether the video contains a watermark."},"seed":{"type":"integer","description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. If unspecified, a random number is chosen."},"enhance_prompt":{"type":"boolean","default":true,"description":"Whether to enable prompt expansion."}},"required":["model","prompt"],"title":"alibaba/wan2.2-t2v-plus"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Fetch the video After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : aimlapi_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/generate/video/alibaba/generation" headers = { "Authorization": f"Bearer {aimlapi_key}", } data = { "model": "alibaba/wan2.2-t2v-plus", "prompt": ''' A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming. ''' } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/generate/video/alibaba/generation" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {aimlapi_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... 
Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; const https = require("https"); const { URL } = require("url"); // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "alibaba/wan2.2-t2v-plus", prompt: ` A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming. ` }); const url = new URL(`${baseUrl}/generate/video/alibaba/generation`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data) } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const result = JSON.parse(body); callback(result); } }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/generate/video/alibaba/generation`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json" } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const result = JSON.parse(body); callback(result); }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 10 * 1000; // 10 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: dabf0a77-d01c-4982-8857-0ad0fa233053 Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': 'dabf0a77-d01c-4982-8857-0ad0fa233053', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/alpaca/1d/4b/20250804/e74f3495/dabf0a77-d01c-4982-8857-0ad0fa233053.mp4?Expires=1754382147&OSSAccessKeyId=LTAI5tRcsWJEymQaTsKbKqGf&Signature=zrF4UJZaHf%2BmF1%2BuryfrMhk%2Bk58%3D'}} ``` {% endcode %}
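Note that the link in the completed response above is a signed URL with an `Expires` parameter, so it may stop working after some time. If you store the generation ID, you can query the fetch endpoint again later; whether older tasks still return a working link depends on how long results are retained, so treat this as a sketch:

{% code overflow="wrap" %}
```python
import requests

def refetch_video(gen_id, aimlapi_key, base_url="https://api.aimlapi.com/v2"):
    # Re-query the fetch endpoint for a previously created task
    response = requests.get(
        f"{base_url}/generate/video/alibaba/generation",
        params={"generation_id": gen_id},
        headers={"Authorization": f"Bearer {aimlapi_key}"},
    )
    response.raise_for_status()
    return response.json()
```
{% endcode %}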
**Original**: [1920x1080](https://drive.google.com/file/d/1X7oRE-TmJ7m9oJ1mt0Itd3z7lMlETufB/view?usp=sharing) **Low-res GIF preview**:

"A menacing evil dragon appears in a distance above the tallest mountain,
then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming."

[^1]: Frames per second --- # Source: https://docs.aimlapi.com/api-references/video-models/alibaba-cloud/wan-2.5-preview-image-to-video.md # Wan 2.5 Preview (Image-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/wan2.5-i2v-preview` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} This image-to-video model generates videos up to 1080p and can voice a character with full lip-sync by providing dialogue directly in the `prompt` parameter. In addition to the features of the [Wan 2.5 Preview (Text-to-Video)](https://docs.aimlapi.com/api-references/video-models/alibaba-cloud/wan-2.5-preview-text-to-video) model, it also supports uploading a reference image, which may depict the character to be animated or the surrounding scene. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find two corresponding API schemas and examples for both endpoint calls.
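The sketch below previews a request payload for this model ahead of the full schemas and examples. `model`, `prompt`, and `image_url` are required; `duration` and `resolution` are optional, and the image URL is a placeholder:

{% code overflow="wrap" %}
```python
# A request payload sketch for alibaba/wan2.5-i2v-preview.
# The image URL is a placeholder; optional fields show example values.
payload = {
    "model": "alibaba/wan2.5-i2v-preview",
    "prompt": "The character smiles, waves at the camera, and says: 'Nice to meet you!'",
    "image_url": "https://example.com/portrait.jpeg",
    "duration": 5,          # 5 or 10 seconds (default: 10)
    "resolution": "720p",   # "480p" | "720p" | "1080p" (default: "1080p")
}
```
{% endcode %}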
## API Schemas {% hint style="success" %} Now, all of our API schemas for video models use our new universal short URL — `https://api.aimlapi.com/v2/video/generations`.\ However, you can still call this model using the legacy URL that includes the vendor name. {% endhint %} ### Video Generation This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/wan2.5-i2v-preview"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame for the video."},"resolution":{"type":"string","enum":["480p","720p","1080p"],"default":"1080p","description":"An enumeration where the short side of the video frame determines the resolution."},"aspect_ratio":{"type":"string","enum":["16:9","9:16","1:1"],"default":"16:9","description":"The aspect ratio of the generated video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10],"default":"10"},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"seed":{"type":"integer","description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. If unspecified, a random number is chosen."},"enhance_prompt":{"type":"boolean","default":true,"description":"Whether to enable prompt expansion."}},"required":["model","prompt","image_url"],"title":"alibaba/wan2.5-i2v-preview"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Fetch the video After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # replace with your actual AI/ML API key api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/generate/video/alibaba/generation" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "alibaba/wan2.5-i2v-preview", "prompt": '''Mona Lisa nervously puts on glasses with her hands and asks her off-screen friend to the left: ‘Do they suit me?’ She then tilts her head slightly to one side and then the other, so the unseen friend can better judge.''', "image_url": "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/generate/video/alibaba/generation" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status 
== "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "alibaba/wan2.5-i2v-preview", prompt: `Mona Lisa nervously puts on glasses with her hands and asks her off-screen friend to the left: ‘Do they suit me?’ She then tilts her head slightly to one side and then the other, so the unseen friend can better judge.`, image_url: "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", }); const url = new URL(`${baseUrl}/generate/video/alibaba/generation`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/generate/video/alibaba/generation`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("Failed to start generation"); return; } const genId = genResponse.id; console.log("Gen_ID:", genId); const startTime = Date.now(); const timeout = 600000; const checkStatus = () => { if (Date.now() - startTime > timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, 10000); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: c8a198bb-8b06-4640-91ee-f96caa792390:alibaba/wan2.5-i2v-preview Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': 'c8a198bb-8b06-4640-91ee-f96caa792390:alibaba/wan2.5-i2v-preview', 'status': 'completed', 'video': {'url': 'https://dashscope-result-sh.oss-cn-shanghai.aliyuncs.com/1d/15/20250927/1080b7c5/c8a198bb-8b06-4640-91ee-f96caa792390.mp4?Expires=1759009739&OSSAccessKeyId=LTAI5tKPD3TMqf2Lna1fASuh&Signature=3oH7JKM54LL2LwnoRKvTyyhvWeM%3D'}} ``` {% endcode %}
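If you work with several of these video models, you may want to factor the polling loop from the example above into a small reusable helper. The sketch below mirrors the logic of the examples (10-second interval, fixed timeout, the same set of in-progress statuses); the fetch URL is passed in as a parameter:

{% code overflow="wrap" %}
```python
import time
import requests

IN_PROGRESS_STATUSES = {"waiting", "active", "queued", "generating"}

def wait_for_video(gen_id, api_key, fetch_url, interval=10, timeout=600):
    # Poll the fetch endpoint until the task leaves an in-progress status
    # or the timeout is reached. Returns the final response, or None on timeout.
    headers = {"Authorization": f"Bearer {api_key}"}
    deadline = time.time() + timeout
    while time.time() < deadline:
        response = requests.get(fetch_url, params={"generation_id": gen_id}, headers=headers)
        response.raise_for_status()
        data = response.json()
        if data.get("status") not in IN_PROGRESS_STATUSES:
            return data
        time.sleep(interval)
    return None
```
{% endcode %}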
**Processing time**: \~3 min 40 sec. **Generated video** (786x1172, with sound): {% embed url="" fullWidth="false" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/alibaba-cloud/wan-2.5-preview-text-to-video.md # Wan 2.5 Preview (Text-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/wan2.5-t2v-preview` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} This text-to-video model generates videos up to 1080p and can voice a character with full lip-sync by providing dialogue directly in the `prompt` parameter. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find two corresponding API schemas and examples for both endpoint calls.
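As a preview of the request body described in the schema below: only `model` and `prompt` are required, and dialogue placed directly in the prompt is voiced with lip-sync, as described above. The remaining fields in this sketch are optional examples:

{% code overflow="wrap" %}
```python
# A request payload sketch for alibaba/wan2.5-t2v-preview.
# Only "model" and "prompt" are required; other fields show example values.
payload = {
    "model": "alibaba/wan2.5-t2v-preview",
    "prompt": 'A street performer looks into the camera and says: "Welcome to the show!"',
    "duration": 10,          # 5 or 10 seconds
    "resolution": "1080p",   # "480p" | "720p" | "1080p"
    "aspect_ratio": "16:9",  # "16:9" | "9:16" | "1:1"
}
```
{% endcode %}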
## API Schemas ### Video Generation You can generate a video using this API. In the basic setup, you only need a prompt.\ This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/wan2.5-t2v-preview"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"resolution":{"type":"string","enum":["480p","720p","1080p"],"default":"1080p","description":"An enumeration where the short side of the video frame determines the resolution."},"aspect_ratio":{"type":"string","enum":["16:9","9:16","1:1"],"default":"16:9","description":"The aspect ratio of the generated video."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10],"default":"10"},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"seed":{"type":"integer","description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. If unspecified, a random number is chosen."},"enhance_prompt":{"type":"boolean","default":true,"description":"Whether to enable prompt expansion."}},"required":["model","prompt"],"title":"alibaba/wan2.5-t2v-preview"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Fetch the video After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : aimlapi_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/generate/video/alibaba/generation" headers = { "Authorization": f"Bearer {aimlapi_key}", } data = { "model": "alibaba/wan2.5-t2v-preview", "prompt": ''' A racoon is happily eating an ice cream. Suddenly, he pauses, looks directly into the camera, and says with full confidence: "Hello, two-legged!" His lip movements perfectly match the speech. Then, in a strong Irish accent, he adds: "Wanna some?" — while extending the half-eaten ice cream toward the camera. 
''', "resolution": "1080p" } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/generate/video/alibaba/generation" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {aimlapi_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; const https = require("https"); const { URL } = require("url"); // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "alibaba/wan2.5-t2v-preview", prompt: ` A racoon is happily eating an ice cream. Suddenly, he pauses, looks directly into the camera, and says with full confidence: "Hello, two-legged!" His lip movements perfectly match the speech. Then, in a strong Irish accent, he adds: "Wanna some?" — while extending the half-eaten ice cream toward the camera. 
`, resolution: '1080p' }); const url = new URL(`${baseUrl}/generate/video/alibaba/generation`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data) } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const result = JSON.parse(body); callback(result); } }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/generate/video/alibaba/generation`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json" } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const result = JSON.parse(body); callback(result); }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 10 * 1000; // 10 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': '8736603f-7944-42b6-8a3b-8b2c1e246c34:alibaba/wan2.5-t2v-preview', 'status': 'queued', 'meta': {'usage': {'tokens_used': 1050000}}} Generation ID: 8736603f-7944-42b6-8a3b-8b2c1e246c34:alibaba/wan2.5-t2v-preview Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {'id': '8736603f-7944-42b6-8a3b-8b2c1e246c34:alibaba/wan2.5-t2v-preview', 'status': 'completed', 'video': {'url': 'https://dashscope-result-sh.oss-accelerate.aliyuncs.com/1d/e9/20250927/3c240a55/8736603f-7944-42b6-8a3b-8b2c1e246c34.mp4?Expires=1759007661&OSSAccessKeyId=LTAI5tKPD3TMqf2Lna1fASuh&Signature=L67uOyWei9u9WcvvY570igR8olU%3D'}} ``` {% endcode %}
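Once the task reaches the `completed` status, the response's `video.url` points to the finished MP4 file. Note that the returned URLs are signed and expire after a while, so it is best to download the file promptly. Below is a minimal sketch for saving it locally, assuming `response_data` is the final dictionary returned by `get_video()` in the example above (the helper name and output file name are illustrative):

{% code overflow="wrap" %}
```python
import requests

def download_video(response_data, file_name="generated_video.mp4"):
    # The completed task response contains a temporary, signed download URL
    video_url = response_data["video"]["url"]

    # Stream the MP4 to disk in chunks instead of loading it fully into memory
    with requests.get(video_url, stream=True) as video_response:
        video_response.raise_for_status()
        with open(file_name, "wb") as file:
            for chunk in video_response.iter_content(chunk_size=8192):
                file.write(chunk)

    return file_name
```
{% endcode %}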
**Processing time**: \~3 min. **Generated Video** (1920x1080, with sound): {% embed url="" %} `'''A raccoon is happily eating an ice cream. Suddenly, he pauses, looks directly into the camera, and says with full confidence: "Hello, two-legged!" His lip movements perfectly match the speech. Then, in a strong Irish accent, he adds: "Wanna some?" — while extending the half-eaten ice cream toward the camera.'''` {% endembed %} --- # Source: https://docs.aimlapi.com/api-references/video-models/alibaba-cloud/wan-2.6-image-to-video.md # Wan 2.6 (Image-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/wan-2-6-i2v` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} This model transforms images into dynamic video while preserving character identity, enabling consistent motion and synchronized audio. Compared to earlier versions, Wan 2.6 offers stronger instruction following, higher visual fidelity, and significantly enhanced sound generation. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/wan-2-6-i2v"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"image_url":{"type":"string","format":"uri","description":"A direct link to an online image or a Base64-encoded local image that will serve as the visual base or the first frame for the video."},"audio_url":{"type":"string","format":"uri","description":"The URL of the audio file. The model will use this audio to generate the video."},"resolution":{"type":"string","enum":["720p","1080p"],"default":"1080p","description":"An enumeration where the short side of the video frame determines the resolution."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10,15],"default":"10"},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"shot_type":{"type":"string","enum":["single","multi"],"default":"single","description":"Specifies the shot type of the generated video, that is, whether the video consists of a single continuous shot or multiple switched shots.\nThis parameter takes effect only when \"prompt_extend\" is set to 'true':\n- single: (default) Outputs a single-shot video.\n- multi: Outputs a multi-shot video."},"generate_audio":{"type":"boolean","default":true,"description":"Specifies whether to automatically add audio to the generated video.\nThis parameter takes effect only when 'audio_url' is not provided."},"seed":{"type":"integer","description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. If unspecified, a random number is chosen."},"enhance_prompt":{"type":"boolean","default":true,"description":"Whether to enable prompt expansion."}},"required":["model","prompt","image_url"],"title":"alibaba/wan-2-6-i2v"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. 
This endpoint lets you check the status of a video generation task using its `generation_id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. ## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Code Example The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } payload = { "model": "alibaba/wan-2-6-i2v", "prompt": "Mona Lisa puts on glasses with her hands.", "image_url": "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/mona_lisa_extended.jpg", "duration": "5", } response = requests.post(url, json=payload, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() print(gen_response) gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["queued", "generating"]: print(f"Status: {status}. Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. 
Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "alibaba/wan-2-6-i2v", prompt: "Mona Lisa puts on glasses with her hands.", image_url: "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/mona_lisa_extended.jpg", duration: "5", }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 15 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 15 * 1000; // 15 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }) } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 {'id': 'V2cdWP9kao8xiofM-OvwG', 'status': 'queued', 'meta': {'usage': {'credits_used': 1575000}}} Generation ID: V2cdWP9kao8xiofM-OvwG Status: queued. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: {'id': 'V2cdWP9kao8xiofM-OvwG', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/alpaca/1d/1b/20260107/0fa8e3c9/29195163-523901dd-f86f-434a-bf96-0223ec06c352.mp4?Expires=1767805107&OSSAccessKeyId=LTAI5tRcsWJEymQaTsKbKqGf&Signature=WY0q7xM%2F9N9dhsW7OiJfHPOegkU%3D'}} ``` {% endcode %}
**Processing time**: \~ 2 min 52 sec. **Generated video** (1920x1080, with sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/alibaba-cloud/wan-2.6-reference-to-video.md # Wan 2.6 (Reference-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/wan-2-6-r2v` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} This model builds videos from reference video material with reliable character consistency, synchronized audio, and cinematic multi-shot storytelling. Compared to earlier versions, Wan 2.6 provides stronger instruction following, higher visual fidelity, and improved sound generation. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find both corresponding API schemas.
## API Schemas ### Create a video generation task and send it to the server ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/wan-2-6-r2v"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"video_urls":{"type":"array","items":{"type":"string","format":"uri"},"minItems":1,"maxItems":3,"description":"An array of URLs for the uploaded reference video files. This parameter is used to extract the character's appearance and voice (if any) to generate a video that matches the reference features.\nEach reference video must contain only one character. For example, character1 is a little girl and character2 is an alarm clock."},"aspect_ratio":{"type":"string","enum":["16:9","9:16","1:1","4:3","3:4"],"default":"16:9","description":"The aspect ratio of the generated video."},"resolution":{"type":"string","enum":["720p","1080p"],"default":"1080p","description":"An enumeration where the short side of the video frame determines the resolution."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10,15],"default":"10"},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"shot_type":{"type":"string","enum":["single","multi"],"default":"single","description":"Specifies the shot type of the generated video, that is, whether the video consists of a single continuous shot or multiple switched shots.\nThis parameter takes effect only when \"prompt_extend\" is set to 'true':\n- single: (default) Outputs a single-shot video.\n- multi: Outputs a multi-shot video."},"seed":{"type":"integer","description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. If unspecified, a random number is chosen."},"enhance_prompt":{"type":"boolean","default":true,"description":"Whether to enable prompt expansion."}},"required":["model","prompt","video_urls"],"title":"alibaba/wan-2-6-r2v"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Retrieve the generated video from the server After sending a request for video generation, this task is added to the queue. 
This endpoint lets you check the status of a video generation task using its `generation_id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. ## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Code Example The code below creates a video generation task, then automatically polls the server every **15** seconds until it finally receives the video URL. Two reference videos are supplied via URLs, and the prompt defines how the model should use them. {% tabs %} {% tab title="Python" %}
{% code overflow="wrap" %}
```python
import requests
import time

# Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
api_key = "<YOUR_AIMLAPI_KEY>"
base_url = "https://api.aimlapi.com/v2"

# Creating and sending a video generation task to the server
def generate_video():
    url = f"{base_url}/video/generations"
    headers = {
        "Authorization": f"Bearer {api_key}", 
        "Content-Type": "application/json"
    }

    payload = {
        "model": "alibaba/wan-2-6-r2v",
        "prompt": '''
Use the woman from the second reference video as Mona Lisa — keep her face, clothing, colors, lighting, camera angle, and background exactly as in the second reference video.
Do not replace her with a different person and do not change the environment.

Take the raccoon only from the first reference video — keep the same fur pattern, colors, proportions, and appearance.
Do not generate a different raccoon.

Place the raccoon gently in Mona Lisa’s arms, as if he is her pet. She softly pets the raccoon. The raccoon affectionately licks her face, and Mona Lisa reacts with a warm, joyful laugh.

The audio must be in English. Mona Lisa says the short line:
"Oh, you sweet little one!"
Synchronize her lip movement to this line only. Do not generate Chinese speech.

Keep the visual style, motion, framing, and atmosphere realistic and consistent with the second reference video.
''',
        "video_urls":[
            "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/racoon-in-the-forest.mp4",
            "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/monalisa.mp4"
            ],
        # "duration": "5",
    }
 
    response = requests.post(url, json=payload, headers=headers)
    
    if response.status_code >= 400:
        print(f"Error: {response.status_code} - {response.text}")
    else:
        response_data = response.json()
        return response_data

# Requesting the result of the task from the server using the generation_id
def get_video(gen_id):
    url = f"{base_url}/video/generations"
    params = {
        "generation_id": gen_id,
    }
    
    headers = {
        "Authorization": f"Bearer {api_key}", 
        "Content-Type": "application/json"
        }

    response = requests.get(url, params=params, headers=headers)
    return response.json()


def main():
    # Running video generation and getting a task id
    gen_response = generate_video()
    print(gen_response)
    gen_id = gen_response.get("id")
    print("Generation ID:  ", gen_id)

    # Try to retrieve the video from the server every 15 sec
    if gen_id:
        start_time = time.time()

        timeout = 1000
        while time.time() - start_time < timeout:
            response_data = get_video(gen_id)

            if response_data is None:
                print("Error: No response from API")
                break

            status = response_data.get("status")
            
            if status in ["queued", "generating"]:
                print(f"Status: {status}. Checking again in 15 seconds.")
                time.sleep(15)
            else:
                print("Processing complete:\n", response_data)
                return response_data

        print("Timeout reached. Stopping.")
        return None  


if __name__ == "__main__":
    main()
```
{% endcode %}
{% endtab %}
{% tab title="JS" %}
{% code overflow="wrap" %}
```javascript
const https = require("https");
const { URL } = require("url");

// Replace with your actual AI/ML API key
const apiKey = "";
const baseUrl = "https://api.aimlapi.com/v2";

// Creating and sending a video generation task to the server
function generateVideo(callback) {
  const data = JSON.stringify({
    model: "alibaba/wan-2-6-r2v",
    prompt: `Use the woman from the second reference video as Mona Lisa — keep her face, clothing, colors, lighting, camera angle, and background exactly as in the second reference video. Do not replace her with a different person and do not change the environment. Take the raccoon only from the first reference video — keep the same fur pattern, colors, proportions, and appearance. Do not generate a different raccoon. Place the raccoon gently in Mona Lisa’s arms, as if he is her pet. She softly pets the raccoon. The raccoon affectionately licks her face, and Mona Lisa reacts with a warm, joyful laugh. The audio must be in English. Mona Lisa says the short line: "Oh, you sweet little one!" Synchronize her lip movement to this line only. Do not generate Chinese speech. Keep the visual style, motion, framing, and atmosphere realistic and consistent with the second reference video.`,
    video_urls: [
      "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/racoon-in-the-forest.mp4",
      "https://raw.githubusercontent.com/aimlapi/api-docs/main/reference-files/monalisa.mp4"
    ]
  });

  const url = new URL(`${baseUrl}/video/generations`);
  const options = {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json",
      "Content-Length": Buffer.byteLength(data),
    },
  };

  const req = https.request(url, options, (res) => {
    let body = "";
    res.on("data", (chunk) => body += chunk);
    res.on("end", () => {
      if (res.statusCode >= 400) {
        console.error(`Error: ${res.statusCode} - ${body}`);
        callback(null);
      } else {
        const parsed = JSON.parse(body);
        callback(parsed);
      }
    });
  });

  req.on("error", (err) => console.error("Request error:", err));
  req.write(data);
  req.end();
}

// Requesting the result of the task from the server using the generation_id
function getVideo(genId, callback) {
  const url = new URL(`${baseUrl}/video/generations`);
  url.searchParams.append("generation_id", genId);

  const options = {
    method: "GET",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
  };

  const req = https.request(url, options, (res) => {
    let body = "";
    res.on("data", (chunk) => body += chunk);
    res.on("end", () => {
      const parsed = JSON.parse(body);
      callback(parsed);
    });
  });

  req.on("error", (err) => console.error("Request error:", err));
  req.end();
}

// Initiates video generation and checks the status every 15 seconds until completion or timeout
function main() {
  generateVideo((genResponse) => {
    if (!genResponse || !genResponse.id) {
      console.error("No generation ID received.");
      return;
    }

    const genId = genResponse.id;
    console.log("Generation ID:", genId);

    const timeout = 1000 * 1000; // 1000 sec
    const interval = 15 * 1000;  // 15 sec
    const startTime = Date.now();

    const checkStatus = () => {
      if (Date.now() - startTime >= timeout) {
        console.log("Timeout reached. Stopping.");
        return;
      }

      getVideo(genId, (responseData) => {
        if (!responseData) {
          console.error("Error: No response from API");
          return;
        }

        const status = responseData.status;
        if (["queued", "generating"].includes(status)) {
          console.log(`Status: ${status}. Checking again in 15 seconds.`);
          setTimeout(checkStatus, interval);
        } else {
          console.log("Processing complete:\n", responseData);
        }
      });
    };

    checkStatus();
  });
}

main();
```
{% endcode %}
{% endtab %}
{% endtabs %}
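The example above treats any status other than `queued` or `generating` as terminal. According to the response schema, a failed task ends with `status` set to `error` and an `error` object carrying `name` and `message`, so it can be useful to distinguish that case from a successful completion. A minimal sketch (the function name is illustrative; `response_data` is the dictionary returned by `get_video()`):

{% code overflow="wrap" %}
```python
def report_result(response_data):
    # Completed tasks carry the video URL; failed tasks carry an error object
    status = response_data.get("status")
    if status == "completed":
        print("Video URL:", response_data["video"]["url"])
    elif status == "error":
        err = response_data.get("error") or {}
        print(f"Generation failed: {err.get('name')} - {err.get('message')}")
    else:
        print("Unexpected terminal status:", status)
```
{% endcode %}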
Response {% code overflow="wrap" %} ```json5 {'id': 'F4ydZjZhuZFinNQMnQleh', 'status': 'queued', 'meta': {'usage': {'credits_used': 3150000}}} Generation ID: F4ydZjZhuZFinNQMnQleh Status: queued. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: {'id': 'F4ydZjZhuZFinNQMnQleh', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/alpaca/1d/ef/20260107/69015395/89757270-8987f96a-bdd1-439f-92f6-4ebc4e8a7a4d.mp4?Expires=1767873802&OSSAccessKeyId=LTAI5tRcsWJEymQaTsKbKqGf&Signature=kBywVpc6QoOT0%2BqFhUSAmsw3fqo%3D'}} ``` {% endcode %}
**Processing time**: \~ 4 min 20 sec. **Generated video** (1920x1080, with sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/video-models/alibaba-cloud/wan-2.6-text-to-video.md # Wan 2.6 (Text-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/wan-2-6-t2v` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} This model enables text-to-video generation with consistent characters, synchronized audio, and cinematic multi-shot storytelling in a single workflow. Compared to earlier versions, Wan 2.6 delivers stronger instruction following, higher visual fidelity, and dramatically improved sound generation. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find two corresponding API schemas and examples for both endpoint calls.
## API Schemas ### Video Generation This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/wan-2-6-t2v"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"audio_url":{"type":"string","format":"uri","description":"The URL of the audio file. The model will use this audio to generate the video."},"aspect_ratio":{"type":"string","enum":["16:9","9:16","1:1","4:3","3:4"],"default":"16:9","description":"The aspect ratio of the generated video."},"resolution":{"type":"string","enum":["720p","1080p"],"default":"1080p","description":"An enumeration where the short side of the video frame determines the resolution."},"duration":{"type":"integer","description":"The length of the output video in seconds.","enum":[5,10,15],"default":"10"},"negative_prompt":{"type":"string","description":"The description of elements to avoid in the generated video."},"shot_type":{"type":"string","enum":["single","multi"],"default":"single","description":"Specifies the shot type of the generated video, that is, whether the video consists of a single continuous shot or multiple switched shots.\nThis parameter takes effect only when \"prompt_extend\" is set to 'true':\n- single: (default) Outputs a single-shot video.\n- multi: Outputs a multi-shot video."},"generate_audio":{"type":"boolean","default":true,"description":"Specifies whether to automatically add audio to the generated video.\nThis parameter takes effect only when 'audio_url' is not provided."},"seed":{"type":"integer","description":"Varying the seed integer is a way to get different results for the same other request parameters. Using the same value for an identical request will produce similar results. If unspecified, a random number is chosen."},"enhance_prompt":{"type":"boolean","default":true,"description":"Whether to enable prompt expansion."}},"required":["model","prompt"],"title":"alibaba/wan-2-6-t2v"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Fetch the video After sending a request for video generation, this task is added to the queue. 
This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. ## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # Insert your AIML API Key instead of : aimlapi_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/generate/video/alibaba/generation" headers = { "Authorization": f"Bearer {aimlapi_key}", } data = { "model": "alibaba/wan-2-6-t2v", "prompt": ''' A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming. 
''' } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/generate/video/alibaba/generation" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {aimlapi_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() print(gen_response) gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Try to retrieve the video from the server every 15 sec if gen_id: start_time = time.time() timeout = 1000 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status in ["queued", "generating"]: print(f"Status: {status}. Checking again in 15 seconds.") time.sleep(15) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript // Insert your AIML API Key instead of const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; const https = require("https"); const { URL } = require("url"); // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "alibaba/wan-2-6-t2v", prompt: ` A menacing evil dragon appears in a distance above the tallest mountain, then rushes toward the camera with its jaws open, revealing massive fangs. We see it's coming. 
` }); const url = new URL(`${baseUrl}/generate/video/alibaba/generation`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data) } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const result = JSON.parse(body); callback(result); } }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/generate/video/alibaba/generation`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json" } }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const result = JSON.parse(body); callback(result); }); }); req.on("error", (err) => { console.error("Request error:", err); callback(null); }); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("No generation ID received."); return; } const genId = genResponse.id; console.log("Generation ID:", genId); const timeout = 1000 * 1000; // 1000 sec const interval = 15 * 1000; // 15 sec const startTime = Date.now(); const checkStatus = () => { if (Date.now() - startTime >= timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; if (["queued", "generating"].includes(status)) { console.log(`Status: ${status}. Checking again in 15 seconds.`); setTimeout(checkStatus, interval); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }) } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
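The example above sets only the required fields. The schema also exposes optional parameters such as `negative_prompt`, `seed`, `duration`, `aspect_ratio`, and `resolution`; fixing the seed makes an identical request produce similar results, which helps when iterating on a prompt. A payload sketch with illustrative values:

{% code overflow="wrap" %}
```python
payload = {
    "model": "alibaba/wan-2-6-t2v",
    "prompt": "A menacing evil dragon appears above the tallest mountain.",
    "negative_prompt": "blurry, overexposed, low quality",  # elements to avoid
    "seed": 42,             # reuse the same seed for similar results on identical requests
    "duration": 5,          # allowed values per the schema: 5, 10, 15
    "aspect_ratio": "16:9",
    "resolution": "720p",   # or "1080p" (default)
}
```
{% endcode %}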
Response {% code overflow="wrap" %} ```json5 {'id': '08zqLA021WrqNkIw2wc3P', 'status': 'queued', 'meta': {'usage': {'credits_used': 3150000}}} Generation ID: 08zqLA021WrqNkIw2wc3P Status: queued. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Status: generating. Checking again in 15 seconds. Processing complete: {'id': '08zqLA021WrqNkIw2wc3P', 'status': 'completed', 'video': {'url': 'https://cdn.aimlapi.com/alpaca/1d/b5/20260107/52bfdaed/87539900-4dc6abdb-8918-41dd-8c5f-ef36d60e7f99.mp4?Expires=1767808493&OSSAccessKeyId=LTAI5tRcsWJEymQaTsKbKqGf&Signature=eeo05FhwoaCrBXJ0oVWNrCFU8R8%3D'}} ``` {% endcode %}
**Processing time**: \~ 3 min 25 sec. **Generated video** (1920x1080, with sound): {% embed url="" %} --- # Source: https://docs.aimlapi.com/api-references/image-models/alibaba-cloud/wan2.2-t2i-flash.md # wan2.2-t2i-flash {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/wan2.2-t2i-flash` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A text-to-image model offering up to a 12× increase in image generation speed. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/wan2.2-t2i-flash"]},"prompt":{"type":"string","maxLength":2000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"negative_prompt":{"type":"string","maxLength":500,"description":"The description of elements to avoid in the generated image."},"num_images":{"type":"integer","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."},"image_size":{"anyOf":[{"type":"object","properties":{"width":{"type":"integer","minimum":512,"maximum":1440},"height":{"type":"integer","minimum":512,"maximum":1440}},"required":["width","height"],"description":"For both height and width, the value must be a multiple of 32."},{"type":"string","enum":["square_hd","square","portrait_4_3","portrait_16_9","landscape_4_3","landscape_16_9"],"description":"The size of the generated image."}],"default":"landscape_4_3","description":"The size of the generated image."},"enhance_prompt":{"type":"boolean","default":true,"description":"Optional parameter to use an LLM-based prompt rewriting feature for higher-quality images that better match the original prompt. 
Disabling it may affect image quality and prompt alignment."},"watermark":{"type":"boolean","default":false,"description":"Add an invisible watermark to the generated images."},"seed":{"type":"integer","minimum":0,"maximum":2147483647,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."}},"required":["model","prompt"],"title":"alibaba/wan2.2-t2i-flash"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified size using a simple prompt. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "alibaba/wan2.2-t2i-flash", "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.", "image_size": { "width": 1440, "height": 512 }, } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'alibaba/wan2.2-t2i-flash', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.', image_size: { width: 1440, height: 512 }, }), }); const data = await response.json(); console.log('Generation:', data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "created": 1769537085362, "data": [ { "url": "https://cdn.aimlapi.com/alpaca/1d/a0/20260128/652f5a73/7dd2072e-dd4e-4690-82f1-3c536ced6a14270158520.png?Expires=1769623479&OSSAccessKeyId=LTAI5tRcsWJEymQaTsKbKqGf&Signature=%2FrkiDR9ZPPj4Lf2kTxQed%2Bkrg5s%3D" } ], "meta": { "usage": { "credits_used": 52500 } } } ``` {% endcode %}
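The response's `data` array holds one entry per generated image, so it can contain up to four items when `num_images` is set. If you want to save the files locally rather than just print the URLs, here is a minimal sketch, assuming `response_json` is the parsed response from the example above (the helper name and file names are arbitrary):

{% code overflow="wrap" %}
```python
import requests

def save_images(response_json, prefix="wan_t2i"):
    # Each item in "data" carries either a download URL or base64-encoded image data
    for index, item in enumerate(response_json.get("data") or []):
        url = item.get("url")
        if not url:
            continue
        image = requests.get(url)
        image.raise_for_status()
        file_name = f"{prefix}_{index}.png"
        with open(file_name, "wb") as f:
            f.write(image.content)
        print("Saved", file_name)
```
{% endcode %}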
We obtained the following 1440x512 image by running this code example:

"A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses."

--- # Source: https://docs.aimlapi.com/api-references/image-models/alibaba-cloud/wan2.2-t2i-plus.md # wan2.2-t2i-plus {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/wan2.2-t2i-plus` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview The Professional edition of a text-to-image model, designed for high-quality image generation with rich detail. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/wan2.2-t2i-plus"]},"prompt":{"type":"string","maxLength":2000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"negative_prompt":{"type":"string","maxLength":500,"description":"The description of elements to avoid in the generated image."},"num_images":{"type":"integer","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."},"image_size":{"anyOf":[{"type":"object","properties":{"width":{"type":"integer","minimum":512,"maximum":1440},"height":{"type":"integer","minimum":512,"maximum":1440}},"required":["width","height"],"description":"For both height and width, the value must be a multiple of 32."},{"type":"string","enum":["square_hd","square","portrait_4_3","portrait_16_9","landscape_4_3","landscape_16_9"],"description":"The size of the generated image."}],"default":"landscape_4_3","description":"The size of the generated image."},"enhance_prompt":{"type":"boolean","default":true,"description":"Optional parameter to use an LLM-based prompt rewriting feature for higher-quality images that better match the original prompt. Disabling it may affect image quality and prompt alignment."},"watermark":{"type":"boolean","default":false,"description":"Add an invisible watermark to the generated images."},"seed":{"type":"integer","minimum":0,"maximum":2147483647,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."}},"required":["model","prompt"],"title":"alibaba/wan2.2-t2i-plus"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified size using a simple prompt. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "alibaba/wan2.2-t2i-plus", "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.", "image_size": { "width": 1440, "height": 512 }, } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'alibaba/wan2.2-t2i-plus', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.', image_size: { width: 1440, height: 512 }, }), }); const data = await response.json(); console.log('Generation:', data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "created": 1769537248382, "data": [ { "url": "https://cdn.aimlapi.com/alpaca/1d/6b/20260128/42f0a931/37d29071-c065-46c7-b5f5-42a2796957743542318529.png?Expires=1769623633&OSSAccessKeyId=LTAI5tRcsWJEymQaTsKbKqGf&Signature=qCzOS4JIOE9Y3nuZLvlqm3NHBZU%3D" } ], "meta": { "usage": { "credits_used": 105000 } } } ``` {% endcode %}
We obtained the following 1440x512 image by running this code example:

"A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses."

--- # Source: https://docs.aimlapi.com/api-references/video-models/alibaba-cloud/wan2.2-vace-fun-a14b-depth-image-to-video.md # Wan 2.2 VACE Fun Depth (Image-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/wan2.2-vace-fun-a14b-depth` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} A video generation model that combines a source image, mask, and reference video to produce prompted videos with precise source control. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find two corresponding API schemas and examples for both endpoint calls.
## API Schemas ### Video Generation This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/wan2.2-vace-fun-a14b-depth"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"video_url":{"type":"string","format":"uri","description":"A HTTPS URL pointing to a video or a data URI containing a video. This video will be used as a reference during generation."},"negative_prompt":{"type":"string","default":"letterboxing, borders, black bars, bright colors, overexposed, static, blurred details, subtitles, style, artwork, painting, picture, still, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, malformed limbs, fused fingers, still picture, cluttered background, three legs, many people in the background, walking backwards","description":"The description of elements to avoid in the generated video."},"match_input_num_frames":{"type":"boolean"},"num_frames":{"type":"integer","minimum":17,"maximum":241,"default":81,"description":"Number of frames to generate."},"match_input_frames_per_second":{"type":"boolean","description":"Whether to match the input video's frames per second (FPS)."},"frames_per_second":{"type":"integer","minimum":5,"maximum":30,"default":16,"description":"Frames per second of the generated video."},"resolution":{"type":"string","enum":["480p","580p","720p"],"default":"480p","description":"The resolution of the output video, where the number refers to the short side in pixels."},"aspect_ratio":{"type":"string","enum":["auto","16:9","1:1","9:16"],"default":"auto","description":"The aspect ratio of the generated video."},"num_inference_steps":{"type":"integer","default":30,"description":"Number of inference steps for sampling. Higher values give better quality but take longer."},"guidance_scale":{"type":"number","default":5,"description":"Classifier-free guidance scale. Controls prompt adherence / creativity."},"shift":{"type":"number","default":5,"description":"Noise schedule shift parameter. 
Affects temporal dynamics."},"enable_safety_checker":{"type":"boolean","description":"If set to true, the safety checker will be enabled."},"enable_prompt_expansion":{"type":"boolean","description":"Whether to enable prompt expansion."},"preprocess":{"type":"boolean","description":"Whether to preprocess the input video."},"acceleration":{"type":"string","enum":["none","regular"],"default":"regular","description":"Acceleration to use for inference."},"video_quality":{"type":"string","enum":["low","medium","high","maximum"],"default":"high","description":"The quality of the generated video."},"video_write_mode":{"type":"string","enum":["fast","balanced","small"],"default":"balanced","description":"The method used to write the video."},"num_interpolated_frames":{"type":"integer","description":"Number of frames to interpolate between the original frames."},"temporal_downsample_factor":{"type":"integer","description":"Temporal downsample factor for the video."},"enable_auto_downsample":{"type":"boolean","description":"The minimum frames per second to downsample the video to."},"auto_downsample_min_fps":{"type":"number","default":15,"description":"The minimum frames per second to downsample the video to."},"interpolator_model":{"type":"string","enum":["rife","film"],"default":"film","description":"The model to use for interpolation. Rife, or film are available."},"sync_mode":{"type":"boolean","description":"The synchronization mode for audio and video. Loose or tight are available."}},"required":["model","prompt","video_url"],"title":"alibaba/wan2.2-vace-fun-a14b-depth"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Fetch the video After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # replace with your actual AI/ML API key api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "alibaba/wan2.2-vace-fun-a14b-depth", "video_url": "https://storage.googleapis.com/falserverless/example_inputs/wan_animate_input_video.mp4", "prompt": "A lone woman strides through the neon-drenched streets of Tokyo at night. Her crimson dress, a vibrant splash of color against the deep blues and blacks of the cityscape, flows slightly with each step. A tailored black jacket, crisp and elegant, contrasts sharply with the dress's rich texture. Medium shot: The city hums around her, blurred lights creating streaks of color in the background. Close-up: The fabric of her dress catches the streetlight's glow, revealing a subtle silk sheen and the intricate stitching at the hem. Her black jacket’s subtle texture is visible – a fine wool perhaps, with a matte finish. The overall mood is one of quiet confidence and mystery, a vibrant woman navigating a bustling, nocturnal landscape. 
High resolution 4k.", "resolution": "720p", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "alibaba/wan2.2-vace-fun-a14b-depth", prompt: "A lone woman strides through the neon-drenched streets of Tokyo at night. Her crimson dress, a vibrant splash of color against the deep blues and blacks of the cityscape, flows slightly with each step. A tailored black jacket, crisp and elegant, contrasts sharply with the dress's rich texture. Medium shot: The city hums around her, blurred lights creating streaks of color in the background. Close-up: The fabric of her dress catches the streetlight's glow, revealing a subtle silk sheen and the intricate stitching at the hem. Her black jacket’s subtle texture is visible – a fine wool perhaps, with a matte finish. The overall mood is one of quiet confidence and mystery, a vibrant woman navigating a bustling, nocturnal landscape. 
High resolution 4k.", image_url: "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("Failed to start generation"); return; } const genId = genResponse.id; console.log("Gen_ID:", genId); const startTime = Date.now(); const timeout = 600000; const checkStatus = () => { if (Date.now() - startTime > timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, 10000); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: b5592d70-dd31-4e5a-bc5c-5063660c001b:alibaba/wan2.2-vace-fun-a14b-depth Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {"id":"b5592d70-dd31-4e5a-bc5c-5063660c001b:alibaba/wan2.2-vace-fun-a14b-depth","status":"completed","video":{"url":"https://v3b.fal.media/files/b/rabbit/L3U6CofKB0xe_fgCTKj4G.mp4"}} ``` {% endcode %}
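The `video.url` field in the completed response points to the rendered MP4 file. If you want to store the result locally, here is a minimal sketch, assuming `response_data` is the dictionary returned by `main()` in the Python example above; the output file name is an arbitrary choice.

{% code overflow="wrap" %}
```python
import requests

# Minimal sketch (not part of the official example): downloading the finished video.
# Assumes `response_data` is the completed-task response returned by main() above;
# the output file name is an arbitrary choice.
def download_video(response_data, file_name="wan_vace_result.mp4"):
    video_url = response_data["video"]["url"]
    with requests.get(video_url, stream=True) as r:
        r.raise_for_status()
        with open(file_name, "wb") as f:
            for chunk in r.iter_content(chunk_size=8192):
                f.write(chunk)
    return file_name
```
{% endcode %}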
--- # Source: https://docs.aimlapi.com/api-references/video-models/alibaba-cloud/wan2.2-vace-fun-a14b-inpainting-image-to-video.md # Wan 2.2 VACE Fun Inpainting (Image-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/wan2.2-vace-fun-a14b-inpainting` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} A video generation model that combines a source image, mask, and reference video to produce prompted videos with precise source control. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find two corresponding API schemas and examples for both endpoint calls.
## API Schemas ### Video Generation This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/wan2.2-vace-fun-a14b-inpainting"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"video_url":{"type":"string","format":"uri","description":"A HTTPS URL pointing to a video or a data URI containing a video. This video will be used as a reference during generation."},"negative_prompt":{"type":"string","default":"letterboxing, borders, black bars, bright colors, overexposed, static, blurred details, subtitles, style, artwork, painting, picture, still, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, malformed limbs, fused fingers, still picture, cluttered background, three legs, many people in the background, walking backwards","description":"The description of elements to avoid in the generated video."},"match_input_num_frames":{"type":"boolean"},"num_frames":{"type":"integer","minimum":17,"maximum":241,"default":81,"description":"Number of frames to generate."},"match_input_frames_per_second":{"type":"boolean","description":"Whether to match the input video's frames per second (FPS)."},"frames_per_second":{"type":"integer","minimum":5,"maximum":30,"default":16,"description":"Frames per second of the generated video."},"resolution":{"type":"string","enum":["480p","580p","720p"],"default":"480p","description":"The resolution of the output video, where the number refers to the short side in pixels."},"aspect_ratio":{"type":"string","enum":["auto","16:9","1:1","9:16"],"default":"auto","description":"The aspect ratio of the generated video."},"num_inference_steps":{"type":"integer","default":30,"description":"Number of inference steps for sampling. Higher values give better quality but take longer."},"guidance_scale":{"type":"number","default":5,"description":"Classifier-free guidance scale. Controls prompt adherence / creativity."},"shift":{"type":"number","default":5,"description":"Noise schedule shift parameter. 
Affects temporal dynamics."},"enable_safety_checker":{"type":"boolean","description":"If set to true, the safety checker will be enabled."},"enable_prompt_expansion":{"type":"boolean","description":"Whether to enable prompt expansion."},"preprocess":{"type":"boolean","description":"Whether to preprocess the input video."},"acceleration":{"type":"string","enum":["none","regular"],"default":"regular","description":"Acceleration to use for inference."},"video_quality":{"type":"string","enum":["low","medium","high","maximum"],"default":"high","description":"The quality of the generated video."},"video_write_mode":{"type":"string","enum":["fast","balanced","small"],"default":"balanced","description":"The method used to write the video."},"num_interpolated_frames":{"type":"integer","description":"Number of frames to interpolate between the original frames."},"temporal_downsample_factor":{"type":"integer","description":"Temporal downsample factor for the video."},"enable_auto_downsample":{"type":"boolean","description":"The minimum frames per second to downsample the video to."},"auto_downsample_min_fps":{"type":"number","default":15,"description":"The minimum frames per second to downsample the video to."},"interpolator_model":{"type":"string","enum":["rife","film"],"default":"film","description":"The model to use for interpolation. Rife, or film are available."},"sync_mode":{"type":"boolean","description":"The synchronization mode for audio and video. Loose or tight are available."},"image_list":{"type":"array","items":{"type":"string","format":"uri"},"description":"Array of image URLs for multi-image-to-video generation."},"mask_video_url":{"type":"string","format":"uri","description":"URL to the source mask file"}},"required":["model","prompt","video_url"],"title":"alibaba/wan2.2-vace-fun-a14b-inpainting"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Fetch the video After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # replace with your actual AI/ML API key api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "alibaba/wan2.2-vace-fun-a14b-inpainting", "video_url": "https://storage.googleapis.com/falserverless/example_inputs/wan_animate_input_video.mp4", "prompt": "A lone woman strides through the neon-drenched streets of Tokyo at night. Her crimson dress, a vibrant splash of color against the deep blues and blacks of the cityscape, flows slightly with each step. A tailored black jacket, crisp and elegant, contrasts sharply with the dress's rich texture. Medium shot: The city hums around her, blurred lights creating streaks of color in the background. Close-up: The fabric of her dress catches the streetlight's glow, revealing a subtle silk sheen and the intricate stitching at the hem. Her black jacket’s subtle texture is visible – a fine wool perhaps, with a matte finish. The overall mood is one of quiet confidence and mystery, a vibrant woman navigating a bustling, nocturnal landscape. 
High resolution 4k.", "resolution": "720p", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "alibaba/wan2.2-vace-fun-a14b-inpainting", prompt: "A lone woman strides through the neon-drenched streets of Tokyo at night. Her crimson dress, a vibrant splash of color against the deep blues and blacks of the cityscape, flows slightly with each step. A tailored black jacket, crisp and elegant, contrasts sharply with the dress's rich texture. Medium shot: The city hums around her, blurred lights creating streaks of color in the background. Close-up: The fabric of her dress catches the streetlight's glow, revealing a subtle silk sheen and the intricate stitching at the hem. Her black jacket’s subtle texture is visible – a fine wool perhaps, with a matte finish. The overall mood is one of quiet confidence and mystery, a vibrant woman navigating a bustling, nocturnal landscape. 
High resolution 4k.", image_url: "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("Failed to start generation"); return; } const genId = genResponse.id; console.log("Gen_ID:", genId); const startTime = Date.now(); const timeout = 600000; const checkStatus = () => { if (Date.now() - startTime > timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, 10000); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: b5592d70-dd31-4e5a-bc5c-5063660c001b:alibaba/wan2.2-vace-fun-a14b-inpainting Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {"id":"b5592d70-dd31-4e5a-bc5c-5063660c001b:alibaba/wan2.2-vace-fun-a14b-inpainting","status":"completed","video":{"url":"https://v3b.fal.media/files/b/rabbit/L3U6CofKB0xe_fgCTKj4G.mp4"}} ``` {% endcode %}
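Unlike the other VACE Fun variants, this model also accepts the inpainting-specific fields `mask_video_url` (URL to the source mask file) and `image_list` (an array of image URLs for multi-image-to-video generation), as described in the schema above. A minimal sketch of a request body that uses them is shown below; the mask and reference-image URLs are placeholders that you would replace with links to your own assets.

{% code overflow="wrap" %}
```python
# Minimal sketch (not part of the official example): a request body using the
# inpainting-specific fields. The mask and reference-image URLs are placeholders.
data = {
    "model": "alibaba/wan2.2-vace-fun-a14b-inpainting",
    "prompt": "A lone woman strides through the neon-drenched streets of Tokyo at night.",
    "video_url": "https://storage.googleapis.com/falserverless/example_inputs/wan_animate_input_video.mp4",
    "mask_video_url": "https://example.com/your-mask-video.mp4",  # URL to the source mask file
    "image_list": [
        "https://example.com/reference-1.jpeg",  # reference images for
        "https://example.com/reference-2.jpeg",  # multi-image-to-video generation
    ],
    "resolution": "720p",
}
```
{% endcode %}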
--- # Source: https://docs.aimlapi.com/api-references/video-models/alibaba-cloud/wan2.2-vace-fun-a14b-outpainting-image-to-video.md # Wan 2.2 VACE Fun Outpainting (Image-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/wan2.2-vace-fun-a14b-outpainting` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} A video generation model that combines a source image, mask, and reference video to produce prompted videos with precise source control. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find two corresponding API schemas and examples for both endpoint calls.
## API Schemas ### Video Generation This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/wan2.2-vace-fun-a14b-outpainting"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"video_url":{"type":"string","format":"uri","description":"A HTTPS URL pointing to a video or a data URI containing a video. This video will be used as a reference during generation."},"negative_prompt":{"type":"string","default":"letterboxing, borders, black bars, bright colors, overexposed, static, blurred details, subtitles, style, artwork, painting, picture, still, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, malformed limbs, fused fingers, still picture, cluttered background, three legs, many people in the background, walking backwards","description":"The description of elements to avoid in the generated video."},"match_input_num_frames":{"type":"boolean"},"num_frames":{"type":"integer","minimum":17,"maximum":241,"default":81,"description":"Number of frames to generate."},"match_input_frames_per_second":{"type":"boolean","description":"Whether to match the input video's frames per second (FPS)."},"frames_per_second":{"type":"integer","minimum":5,"maximum":30,"default":16,"description":"Frames per second of the generated video."},"resolution":{"type":"string","enum":["480p","580p","720p"],"default":"480p","description":"The resolution of the output video, where the number refers to the short side in pixels."},"aspect_ratio":{"type":"string","enum":["auto","16:9","1:1","9:16"],"default":"auto","description":"The aspect ratio of the generated video."},"num_inference_steps":{"type":"integer","default":30,"description":"Number of inference steps for sampling. Higher values give better quality but take longer."},"guidance_scale":{"type":"number","default":5,"description":"Classifier-free guidance scale. Controls prompt adherence / creativity."},"shift":{"type":"number","default":5,"description":"Noise schedule shift parameter. 
Affects temporal dynamics."},"enable_safety_checker":{"type":"boolean","description":"If set to true, the safety checker will be enabled."},"enable_prompt_expansion":{"type":"boolean","description":"Whether to enable prompt expansion."},"preprocess":{"type":"boolean","description":"Whether to preprocess the input video."},"acceleration":{"type":"string","enum":["none","regular"],"default":"regular","description":"Acceleration to use for inference."},"video_quality":{"type":"string","enum":["low","medium","high","maximum"],"default":"high","description":"The quality of the generated video."},"video_write_mode":{"type":"string","enum":["fast","balanced","small"],"default":"balanced","description":"The method used to write the video."},"num_interpolated_frames":{"type":"integer","description":"Number of frames to interpolate between the original frames."},"temporal_downsample_factor":{"type":"integer","description":"Temporal downsample factor for the video."},"enable_auto_downsample":{"type":"boolean","description":"The minimum frames per second to downsample the video to."},"auto_downsample_min_fps":{"type":"number","default":15,"description":"The minimum frames per second to downsample the video to."},"interpolator_model":{"type":"string","enum":["rife","film"],"default":"film","description":"The model to use for interpolation. Rife, or film are available."},"sync_mode":{"type":"boolean","description":"The synchronization mode for audio and video. Loose or tight are available."},"expand_left":{"type":"boolean","default":true,"description":"Whether to expand the video to the left"},"expand_right":{"type":"boolean","default":true,"description":"Whether to expand the video to the right"},"expand_top":{"type":"boolean","default":true,"description":"Whether to expand the video to the top"},"expand_bottom":{"type":"boolean","default":true,"description":"Whether to expand the video to the bottom"},"expand_ratio":{"type":"number","default":0.25,"description":"Amount of expansion. This is a float value between 0 and 1, where 0.25 adds 25% to the original video size on the specified sides"}},"required":["model","prompt","video_url"],"title":"alibaba/wan2.2-vace-fun-a14b-outpainting"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Fetch the video After sending a request for video generation, this task is added to the queue. 
This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. ## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # replace with your actual AI/ML API key api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "alibaba/wan2.2-vace-fun-a14b-outpainting", "video_url": "https://storage.googleapis.com/falserverless/example_inputs/wan_animate_input_video.mp4", "prompt": "A lone woman strides through the neon-drenched streets of Tokyo at night. Her crimson dress, a vibrant splash of color against the deep blues and blacks of the cityscape, flows slightly with each step. A tailored black jacket, crisp and elegant, contrasts sharply with the dress's rich texture. Medium shot: The city hums around her, blurred lights creating streaks of color in the background. Close-up: The fabric of her dress catches the streetlight's glow, revealing a subtle silk sheen and the intricate stitching at the hem. Her black jacket’s subtle texture is visible – a fine wool perhaps, with a matte finish. The overall mood is one of quiet confidence and mystery, a vibrant woman navigating a bustling, nocturnal landscape. 
High resolution 4k.", "resolution": "720p", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "alibaba/wan2.2-vace-fun-a14b-outpainting", prompt: "A lone woman strides through the neon-drenched streets of Tokyo at night. Her crimson dress, a vibrant splash of color against the deep blues and blacks of the cityscape, flows slightly with each step. A tailored black jacket, crisp and elegant, contrasts sharply with the dress's rich texture. Medium shot: The city hums around her, blurred lights creating streaks of color in the background. Close-up: The fabric of her dress catches the streetlight's glow, revealing a subtle silk sheen and the intricate stitching at the hem. Her black jacket’s subtle texture is visible – a fine wool perhaps, with a matte finish. The overall mood is one of quiet confidence and mystery, a vibrant woman navigating a bustling, nocturnal landscape. 
High resolution 4k.", image_url: "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("Failed to start generation"); return; } const genId = genResponse.id; console.log("Gen_ID:", genId); const startTime = Date.now(); const timeout = 600000; const checkStatus = () => { if (Date.now() - startTime > timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, 10000); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: b5592d70-dd31-4e5a-bc5c-5063660c001b:alibaba/wan2.2-vace-fun-a14b-outpainting Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {"id":"b5592d70-dd31-4e5a-bc5c-5063660c001b:alibaba/wan2.2-vace-fun-a14b-outpainting","status":"completed","video":{"url":"https://v3b.fal.media/files/b/rabbit/L3U6CofKB0xe_fgCTKj4G.mp4"}} ``` {% endcode %}
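This variant exposes the outpainting-specific fields `expand_left`, `expand_right`, `expand_top`, `expand_bottom` (all `true` by default), and `expand_ratio` (default `0.25`), described in the schema above. A minimal sketch of a request body that expands the source video only to the left and right by 25% is shown below; the other values are taken from the Python example on this page.

{% code overflow="wrap" %}
```python
# Minimal sketch (not part of the official example): controlling the outpainting area.
# The source video is expanded only to the left and right, by 25% of its original
# size on each of those sides (expand_ratio is a float between 0 and 1).
data = {
    "model": "alibaba/wan2.2-vace-fun-a14b-outpainting",
    "prompt": "A lone woman strides through the neon-drenched streets of Tokyo at night.",
    "video_url": "https://storage.googleapis.com/falserverless/example_inputs/wan_animate_input_video.mp4",
    "expand_left": True,
    "expand_right": True,
    "expand_top": False,
    "expand_bottom": False,
    "expand_ratio": 0.25,
    "resolution": "720p",
}
```
{% endcode %}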
--- # Source: https://docs.aimlapi.com/api-references/video-models/alibaba-cloud/wan2.2-vace-fun-a14b-pose-image-to-video.md # Wan 2.2 VACE Fun Pose (Image-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/wan2.2-vace-fun-a14b-pose` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} A video generation model that combines a source image, mask, and reference video to produce prompted videos with precise source control. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find two corresponding API schemas and examples for both endpoint calls.
## API Schemas ### Video Generation This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/wan2.2-vace-fun-a14b-pose"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"video_url":{"type":"string","format":"uri","description":"A HTTPS URL pointing to a video or a data URI containing a video. This video will be used as a reference during generation."},"negative_prompt":{"type":"string","default":"letterboxing, borders, black bars, bright colors, overexposed, static, blurred details, subtitles, style, artwork, painting, picture, still, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, malformed limbs, fused fingers, still picture, cluttered background, three legs, many people in the background, walking backwards","description":"The description of elements to avoid in the generated video."},"match_input_num_frames":{"type":"boolean"},"num_frames":{"type":"integer","minimum":17,"maximum":241,"default":81,"description":"Number of frames to generate."},"match_input_frames_per_second":{"type":"boolean","description":"Whether to match the input video's frames per second (FPS)."},"frames_per_second":{"type":"integer","minimum":5,"maximum":30,"default":16,"description":"Frames per second of the generated video."},"resolution":{"type":"string","enum":["480p","580p","720p"],"default":"480p","description":"The resolution of the output video, where the number refers to the short side in pixels."},"aspect_ratio":{"type":"string","enum":["auto","16:9","1:1","9:16"],"default":"auto","description":"The aspect ratio of the generated video."},"num_inference_steps":{"type":"integer","default":30,"description":"Number of inference steps for sampling. Higher values give better quality but take longer."},"guidance_scale":{"type":"number","default":5,"description":"Classifier-free guidance scale. Controls prompt adherence / creativity."},"shift":{"type":"number","default":5,"description":"Noise schedule shift parameter. 
Affects temporal dynamics."},"enable_safety_checker":{"type":"boolean","description":"If set to true, the safety checker will be enabled."},"enable_prompt_expansion":{"type":"boolean","description":"Whether to enable prompt expansion."},"preprocess":{"type":"boolean","description":"Whether to preprocess the input video."},"acceleration":{"type":"string","enum":["none","regular"],"default":"regular","description":"Acceleration to use for inference."},"video_quality":{"type":"string","enum":["low","medium","high","maximum"],"default":"high","description":"The quality of the generated video."},"video_write_mode":{"type":"string","enum":["fast","balanced","small"],"default":"balanced","description":"The method used to write the video."},"num_interpolated_frames":{"type":"integer","description":"Number of frames to interpolate between the original frames."},"temporal_downsample_factor":{"type":"integer","description":"Temporal downsample factor for the video."},"enable_auto_downsample":{"type":"boolean","description":"The minimum frames per second to downsample the video to."},"auto_downsample_min_fps":{"type":"number","default":15,"description":"The minimum frames per second to downsample the video to."},"interpolator_model":{"type":"string","enum":["rife","film"],"default":"film","description":"The model to use for interpolation. Rife, or film are available."},"sync_mode":{"type":"boolean","description":"The synchronization mode for audio and video. Loose or tight are available."}},"required":["model","prompt","video_url"],"title":"alibaba/wan2.2-vace-fun-a14b-pose"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Fetch the video After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # replace with your actual AI/ML API key api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "alibaba/wan2.2-vace-fun-a14b-pose", "video_url": "https://storage.googleapis.com/falserverless/example_inputs/wan_animate_input_video.mp4", "prompt": "A lone woman strides through the neon-drenched streets of Tokyo at night. Her crimson dress, a vibrant splash of color against the deep blues and blacks of the cityscape, flows slightly with each step. A tailored black jacket, crisp and elegant, contrasts sharply with the dress's rich texture. Medium shot: The city hums around her, blurred lights creating streaks of color in the background. Close-up: The fabric of her dress catches the streetlight's glow, revealing a subtle silk sheen and the intricate stitching at the hem. Her black jacket’s subtle texture is visible – a fine wool perhaps, with a matte finish. The overall mood is one of quiet confidence and mystery, a vibrant woman navigating a bustling, nocturnal landscape. 
High resolution 4k.", "resolution": "720p", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "alibaba/wan2.2-vace-fun-a14b-pose", prompt: "A lone woman strides through the neon-drenched streets of Tokyo at night. Her crimson dress, a vibrant splash of color against the deep blues and blacks of the cityscape, flows slightly with each step. A tailored black jacket, crisp and elegant, contrasts sharply with the dress's rich texture. Medium shot: The city hums around her, blurred lights creating streaks of color in the background. Close-up: The fabric of her dress catches the streetlight's glow, revealing a subtle silk sheen and the intricate stitching at the hem. Her black jacket’s subtle texture is visible – a fine wool perhaps, with a matte finish. The overall mood is one of quiet confidence and mystery, a vibrant woman navigating a bustling, nocturnal landscape. 
High resolution 4k.", image_url: "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("Failed to start generation"); return; } const genId = genResponse.id; console.log("Gen_ID:", genId); const startTime = Date.now(); const timeout = 600000; const checkStatus = () => { if (Date.now() - startTime > timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, 10000); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: b5592d70-dd31-4e5a-bc5c-5063660c001b:alibaba/wan2.2-vace-fun-a14b-pose Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {"id":"b5592d70-dd31-4e5a-bc5c-5063660c001b:alibaba/wan2.2-vace-fun-a14b-pose","status":"completed","video":{"url":"https://v3b.fal.media/files/b/rabbit/L3U6CofKB0xe_fgCTKj4G.mp4"}} ``` {% endcode %}
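Once the status reaches `completed`, the `video.url` field points to the finished MP4 file. If you also want to store it locally, a short follow-up request is enough. The snippet below is a minimal sketch that assumes you already hold the completed `response_data` returned by the polling code above; the output file name is arbitrary.

{% code overflow="wrap" %}
```python
import requests

def download_video(response_data, file_name="generated_video.mp4"):
    # The completed response carries the file location in video.url
    video_url = response_data["video"]["url"]

    # Stream the file to disk in chunks so large videos are not held in memory
    with requests.get(video_url, stream=True) as video_response:
        video_response.raise_for_status()
        with open(file_name, "wb") as file:
            for chunk in video_response.iter_content(chunk_size=8192):
                file.write(chunk)
    return file_name
```
{% endcode %}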
--- # Source: https://docs.aimlapi.com/api-references/video-models/alibaba-cloud/wan2.2-vace-fun-a14b-reframe-image-to-video.md # Wan 2.2 VACE Fun Reframe (Image-to-Video) {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/wan2.2-vace-fun-a14b-reframe` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} A video generation model that combines a source image, mask, and reference video to produce prompted videos with precise source control. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## How to Make a Call
Step-by-Step Instructions Generating a video using this model involves sequentially calling two endpoints: * The first one is for creating and sending a video generation task to the server (returns a generation ID). * The second one is for requesting the generated video from the server using the generation ID received from the first endpoint. Below, you can find two corresponding API schemas and examples for both endpoint calls.
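For orientation, here is a condensed sketch of that two-endpoint flow; the full runnable examples appear further down the page. The API key, reference video URL, and polling interval are placeholders.

{% code overflow="wrap" %}
```python
import time
import requests

API_KEY = "<YOUR_AIMLAPI_KEY>"  # placeholder
BASE_URL = "https://api.aimlapi.com/v2"
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# 1. Create the generation task; the response contains the task's generation ID.
task = requests.post(
    f"{BASE_URL}/video/generations",
    headers=HEADERS,
    json={
        "model": "alibaba/wan2.2-vace-fun-a14b-reframe",
        "video_url": "https://example.com/reference.mp4",  # placeholder reference video
    },
).json()
gen_id = task["id"]

# 2. Poll the retrieval endpoint with that ID until the task finishes or fails.
result = {"status": "queued"}
while result["status"] not in ("completed", "error"):
    time.sleep(10)
    result = requests.get(
        f"{BASE_URL}/video/generations",
        headers=HEADERS,
        params={"generation_id": gen_id},
    ).json()

print(result)
```
{% endcode %}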
## API Schemas ### Video Generation This endpoint creates and sends a video generation task to the server — and returns a generation ID. ## POST /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v2/video/generations":{"post":{"operationId":"_v2_video_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/wan2.2-vace-fun-a14b-reframe"]},"prompt":{"type":"string","description":"The text description of the scene, subject, or action to generate in the video."},"video_url":{"type":"string","format":"uri","description":"A HTTPS URL pointing to a video or a data URI containing a video. This video will be used as a reference during generation."},"negative_prompt":{"type":"string","default":"letterboxing, borders, black bars, bright colors, overexposed, static, blurred details, subtitles, style, artwork, painting, picture, still, overall gray, worst quality, low quality, JPEG compression residue, ugly, incomplete, extra fingers, poorly drawn hands, poorly drawn faces, deformed, disfigured, malformed limbs, fused fingers, still picture, cluttered background, three legs, many people in the background, walking backwards","description":"The description of elements to avoid in the generated video."},"match_input_num_frames":{"type":"boolean"},"num_frames":{"type":"integer","minimum":17,"maximum":241,"default":81,"description":"Number of frames to generate."},"match_input_frames_per_second":{"type":"boolean","description":"Whether to match the input video's frames per second (FPS)."},"frames_per_second":{"type":"integer","minimum":5,"maximum":30,"default":16,"description":"Frames per second of the generated video."},"resolution":{"type":"string","enum":["480p","580p","720p"],"default":"480p","description":"The resolution of the output video, where the number refers to the short side in pixels."},"aspect_ratio":{"type":"string","enum":["auto","16:9","1:1","9:16"],"default":"auto","description":"The aspect ratio of the generated video."},"num_inference_steps":{"type":"integer","default":30,"description":"Number of inference steps for sampling. Higher values give better quality but take longer."},"guidance_scale":{"type":"number","default":5,"description":"Classifier-free guidance scale. Controls prompt adherence / creativity."},"shift":{"type":"number","default":5,"description":"Noise schedule shift parameter. 
Affects temporal dynamics."},"enable_safety_checker":{"type":"boolean","description":"If set to true, the safety checker will be enabled."},"enable_prompt_expansion":{"type":"boolean","description":"Whether to enable prompt expansion."},"preprocess":{"type":"boolean","description":"Whether to preprocess the input video."},"acceleration":{"type":"string","enum":["none","regular"],"default":"regular","description":"Acceleration to use for inference."},"video_quality":{"type":"string","enum":["low","medium","high","maximum"],"default":"high","description":"The quality of the generated video."},"video_write_mode":{"type":"string","enum":["fast","balanced","small"],"default":"balanced","description":"The method used to write the video."},"num_interpolated_frames":{"type":"integer","description":"Number of frames to interpolate between the original frames."},"temporal_downsample_factor":{"type":"integer","description":"Temporal downsample factor for the video."},"enable_auto_downsample":{"type":"boolean","description":"The minimum frames per second to downsample the video to."},"auto_downsample_min_fps":{"type":"number","default":15,"description":"The minimum frames per second to downsample the video to."},"interpolator_model":{"type":"string","enum":["rife","film"],"default":"film","description":"The model to use for interpolation. Rife, or film are available."},"sync_mode":{"type":"boolean","description":"The synchronization mode for audio and video. Loose or tight are available."},"zoom_factor":{"type":"number","description":"Zoom factor for the video. When this value is greater than 0, the video will be zoomed in by this factor (in relation to the canvas size,) cutting off the edges of the video. A value of 0 means no zoom"},"trim_borders":{"type":"boolean","default":true,"description":"Whether to trim borders from the video"}},"required":["model","video_url"],"title":"alibaba/wan2.2-vace-fun-a14b-reframe"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ### Fetch the video After sending a request for video generation, this task is added to the queue. This endpoint lets you check the status of a video generation task using its `id`, obtained from the endpoint described above.\ If the video generation task status is `completed`, the response will include the final result — with the generated video URL and additional metadata. 
## GET /v2/video/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key","in":"header"}}},"paths":{"/v2/video/generations":{"get":{"operationId":"_v2_video_generations","parameters":[{"name":"generation_id","required":true,"in":"query","schema":{"type":"string"}}],"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"id":{"type":"string","description":"The ID of the generated video."},"status":{"type":"string","enum":["queued","generating","completed","error"],"description":"The current status of the generation task."},"video":{"type":"object","nullable":true,"properties":{"url":{"type":"string","format":"uri","description":"The URL where the file can be downloaded from."}},"required":["url"]},"error":{"type":"object","nullable":true,"properties":{"name":{"type":"string"},"message":{"type":"string"}},"required":["name","message"],"description":"Description of the error, if any."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}},"required":["id","status"]}}}}}}}}} ``` ## Full Example: Generating and Retrieving the Video From the Server The code below creates a video generation task, then automatically polls the server every **10** seconds until it finally receives the video URL. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import time # replace with your actual AI/ML API key api_key = "" base_url = "https://api.aimlapi.com/v2" # Creating and sending a video generation task to the server def generate_video(): url = f"{base_url}/video/generations" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "alibaba/wan2.2-vace-fun-a14b-reframe", "video_url": "https://storage.googleapis.com/falserverless/example_inputs/wan_animate_input_video.mp4", "image_url": "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", "resolution": "720p", } response = requests.post(url, json=data, headers=headers) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_video(gen_id): url = f"{base_url}/video/generations" params = { "generation_id": gen_id, } headers = { "Authorization": f"Bearer {api_key}", "Content-Type": "application/json" } response = requests.get(url, params=params, headers=headers) return response.json() def main(): # Running video generation and getting a task id gen_response = generate_video() gen_id = gen_response.get("id") print("Generation ID: ", gen_id) # Trying to retrieve the video from the server every 10 sec if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_video(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") print("Status:", status) if status == "waiting" or status == "active" or status == "queued" or status == "generating": print("Still waiting... 
Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JavaScript" %} {% code overflow="wrap" %} ```javascript const https = require("https"); const { URL } = require("url"); // Replace with your actual AI/ML API key const apiKey = ""; const baseUrl = "https://api.aimlapi.com/v2"; // Creating and sending a video generation task to the server function generateVideo(callback) { const data = JSON.stringify({ model: "alibaba/wan2.2-vace-fun-a14b-reframe", video_url: "https://storage.googleapis.com/falserverless/example_inputs/wan_animate_input_video.mp4", image_url: "https://s2-111386.kwimgs.com/bs2/mmu-aiplatform-temp/kling/20240620/1.jpeg", resolution: "720p", }); const url = new URL(`${baseUrl}/video/generations`); const options = { method: "POST", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", "Content-Length": Buffer.byteLength(data), }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { if (res.statusCode >= 400) { console.error(`Error: ${res.statusCode} - ${body}`); callback(null); } else { const parsed = JSON.parse(body); callback(parsed); } }); }); req.on("error", (err) => console.error("Request error:", err)); req.write(data); req.end(); } // Requesting the result of the task from the server using the generation_id function getVideo(genId, callback) { const url = new URL(`${baseUrl}/video/generations`); url.searchParams.append("generation_id", genId); const options = { method: "GET", headers: { "Authorization": `Bearer ${apiKey}`, "Content-Type": "application/json", }, }; const req = https.request(url, options, (res) => { let body = ""; res.on("data", (chunk) => body += chunk); res.on("end", () => { const parsed = JSON.parse(body); callback(parsed); }); }); req.on("error", (err) => console.error("Request error:", err)); req.end(); } // Initiates video generation and checks the status every 10 seconds until completion or timeout function main() { generateVideo((genResponse) => { if (!genResponse || !genResponse.id) { console.error("Failed to start generation"); return; } const genId = genResponse.id; console.log("Gen_ID:", genId); const startTime = Date.now(); const timeout = 600000; const checkStatus = () => { if (Date.now() - startTime > timeout) { console.log("Timeout reached. Stopping."); return; } getVideo(genId, (responseData) => { if (!responseData) { console.error("Error: No response from API"); return; } const status = responseData.status; console.log("Status:", status); if (["waiting", "active", "queued", "generating"].includes(status)) { console.log("Still waiting... Checking again in 10 seconds."); setTimeout(checkStatus, 10000); } else { console.log("Processing complete:\n", responseData); } }); }; checkStatus(); }); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 Generation ID: b5592d70-dd31-4e5a-bc5c-5063660c001b:alibaba/wan2.2-vace-fun-a14b-reframe Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: generating Still waiting... Checking again in 10 seconds. Status: completed Processing complete:\n {"id":"b5592d70-dd31-4e5a-bc5c-5063660c001b:alibaba/wan2.2-vace-fun-a14b-reframe","status":"completed","video":{"url":"https://v3b.fal.media/files/b/rabbit/L3U6CofKB0xe_fgCTKj4G.mp4"}} ``` {% endcode %}
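Note that, per the response schema above, a failed task returns `status: "error"` together with an `error` object containing `name` and `message`. A small helper like the one below (a sketch relying only on the fields declared in that schema) can make the polling loop report failures explicitly instead of printing the raw response.

{% code overflow="wrap" %}
```python
def report_generation_error(response_data):
    """Return True and print details if the generation task ended with an error."""
    if response_data.get("status") != "error":
        return False
    # The schema defines `error` with required `name` and `message` fields
    error = response_data.get("error") or {}
    print(f"Generation failed: {error.get('name')}: {error.get('message')}")
    return True
```
{% endcode %}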
--- # Source: https://docs.aimlapi.com/api-references/image-models/alibaba-cloud/wan2.5-t2i-preview.md # wan2.5-t2i-preview {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/wan2.5-t2i-preview` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview A text-to-image model capable of producing high-fidelity, visually accurate images. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/wan2.5-t2i-preview"]},"prompt":{"type":"string","maxLength":2000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"negative_prompt":{"type":"string","maxLength":500,"description":"The description of elements to avoid in the generated image."},"num_images":{"type":"integer","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."},"image_size":{"anyOf":[{"type":"object","properties":{"width":{"type":"integer","minimum":512,"maximum":1440},"height":{"type":"integer","minimum":512,"maximum":1440}},"required":["width","height"],"description":"For both height and width, the value must be a multiple of 32."},{"type":"string","enum":["square_hd","square","portrait_4_3","portrait_16_9","landscape_4_3","landscape_16_9"],"description":"The size of the generated image."}],"default":"landscape_4_3","description":"The size of the generated image."},"enhance_prompt":{"type":"boolean","default":true,"description":"Optional parameter to use an LLM-based prompt rewriting feature for higher-quality images that better match the original prompt. Disabling it may affect image quality and prompt alignment."},"watermark":{"type":"boolean","default":false,"description":"Add an invisible watermark to the generated images."},"seed":{"type":"integer","minimum":0,"maximum":2147483647,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."}},"required":["model","prompt"],"title":"alibaba/wan2.5-t2i-preview"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified size using a simple prompt. 
{% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "alibaba/wan2.5-t2i-preview", "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.", "image_size": { "width": 1440, "height": 512 }, } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'alibaba/wan2.5-t2i-preview', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.', image_size: { width: 1440, height: 512 }, }), }); const data = await response.json(); console.log('Generation:', data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "created": 1769537446493, "data": [ { "url": "https://cdn.aimlapi.com/alpaca/1d/b0/20260128/f599ada6/50628345-4c1440df-86b3-464b-89f6-3c7b19611f6e.png?Expires=1769623835&OSSAccessKeyId=LTAI5tRcsWJEymQaTsKbKqGf&Signature=CWLemI3Pnxxa9XzWGHblYHv9tvg%3D" } ], "meta": { "usage": { "credits_used": 63000 } } } ``` {% endcode %}
We obtained the following 1440x512 image by running this code example:

"A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses."

--- # Source: https://docs.aimlapi.com/capabilities/web-search.md # Web Search ## Overview This capability of text chat models allows them to send search queries to the web, retrieve relevant content, and use it to generate more accurate and up-to-date responses—particularly useful for recent events or less common topics. ## Models That Support Web Search * [gpt-4o-search-preview](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o-search-preview) * [gpt-4o-mini-search-preview](https://docs.aimlapi.com/api-references/text-models-llm/openai/gpt-4o-mini-search-preview) * [moonshot/kimi-k2-preview](https://docs.aimlapi.com/api-references/text-models-llm/moonshot/kimi-k2-preview) * [moonshot/kimi-k2-0905-preview](https://docs.aimlapi.com/api-references/text-models-llm/moonshot/kimi-k2-preview) * [moonshot/kimi-k2-turbo-preview](https://docs.aimlapi.com/api-references/text-models-llm/moonshot/kimi-k2-turbo-preview) * [perplexity/sonar](https://docs.aimlapi.com/api-references/text-models-llm/perplexity/sonar) * [perplexity/sonar-pro](https://docs.aimlapi.com/api-references/text-models-llm/perplexity/sonar-pro) ## Solutions That Support Web Search * [AI Search Engine](https://docs.aimlapi.com/solutions/bagoodex/ai-search-engine) --- # Source: https://docs.aimlapi.com/api-references/speech-models/speech-to-text/openai/whisper-base.md # whisper-base {% hint style="info" %} This documentation is valid for the following list of our models: * `#g1_whisper-base` {% endhint %} {% hint style="success" %} Note: Previously, our STT models operated via a single API call to POST `https://api.aimlapi.com/v1/stt`. You can view the API schema [here](https://docs.aimlapi.com/api-references/speech-models/speech-to-text/stt-legacy). Now, we are switching to a new two-step process: * `POST https://api.aimlapi.com/v1/stt/create` – Creates and submits a speech-to-text processing task to the server. This method accepts the same parameters as the old version but returns a `generation_id` instead of the final transcript. * `GET https://api.aimlapi.com/v1/stt/{generation_id}` – Retrieves the generated transcript from the server using the `generation_id` obtained from the previous API call. This approach helps prevent generation failures due to timeouts.\ We've prepared [a couple of examples](#quick-code-examples) below to make the transition to the new STT API easier for you. {% endhint %} ## Model Overview The Whisper models are primarily for AI research, focusing on model robustness, generalization, and biases, and are also effective for English speech recognition. The use of Whisper models for transcribing non-consensual recordings or in high-risk decision-making contexts is strongly discouraged due to potential inaccuracies and ethical concerns. The models are trained using 680,000 hours of audio and corresponding transcripts from the internet, with 65% being English audio and transcripts, 18% non-English audio with English transcripts, and 17% non-English audio with matching non-English transcripts, covering 98 languages in total. {% hint style="success" %} OpenAI STT models are priced based on tokens, similar to chat models. In practice, this means the cost primarily depends on the duration of the input audio. {% endhint %} ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). 
## API Schemas #### Creating and sending a speech-to-text conversion task to the server ## POST /v1/stt/create > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.SpeechToTextCreateResponseDTO":{"type":"object","properties":{"generation_id":{"type":"string","format":"uuid"}},"required":["generation_id"]}}},"paths":{"/v1/stt/create":{"post":{"operationId":"VoiceModelsController_createSpeechToText_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["#g1_whisper-base"]},"custom_intent":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}}],"description":"A custom intent you want the model to detect within your input audio if present. Submit up to 100."},"custom_topic":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}}],"description":"A custom topic you want the model to detect within your input audio if present. Submit up to 100."},"custom_intent_mode":{"type":"string","enum":["strict","extended"],"description":"Sets how the model will interpret strings submitted to the custom_intent param. When strict, the model will only return intents submitted using the custom_intent param. When extended, the model will return its own detected intents in addition those submitted using the custom_intents param."},"custom_topic_mode":{"type":"string","enum":["strict","extended"],"description":"Sets how the model will interpret strings submitted to the custom_topic param. When strict, the model will only return topics submitted using the custom_topic param. When extended, the model will return its own detected topics in addition to those submitted using the custom_topic param."},"detect_language":{"type":"boolean","description":"Enables language detection to identify the dominant language spoken in the submitted audio."},"detect_entities":{"type":"boolean","description":"When Entity Detection is enabled, the Punctuation feature will be enabled by default."},"detect_topics":{"type":"boolean","description":"Detects the most important and relevant topics that are referenced in speech within the audio."},"diarize":{"type":"boolean","description":"Recognizes speaker changes. Each word in the transcript will be assigned a speaker number starting at 0."},"dictation":{"type":"boolean","description":"Identifies and extracts key entities from content in submitted audio."},"diarize_version":{"type":"string","description":""},"extra":{"type":"string","description":"Arbitrary key-value pairs that are attached to the API response for usage in downstream processing."},"filler_words":{"type":"boolean","description":"Filler Words can help transcribe interruptions in your audio, like “uh” and “um”."},"intents":{"type":"boolean","description":"Recognizes speaker intent throughout a transcript or text."},"keywords":{"type":"string","description":"Keywords can boost or suppress specialized terminology and brands."},"language":{"type":"string","description":"The BCP-47 language tag that hints at the primary spoken language. 
Depending on the Model and API endpoint you choose only certain languages are available"},"measurements":{"type":"boolean","description":"Spoken measurements will be converted to their corresponding abbreviations"},"multi_channel":{"type":"boolean","description":"Transcribes each audio channel independently"},"numerals":{"type":"boolean","description":"Numerals converts numbers from written format to numerical format"},"paragraphs":{"type":"boolean","description":"Splits audio into paragraphs to improve transcript readability"},"profanity_filter":{"type":"boolean","description":"Profanity Filter looks for recognized profanity and converts it to the nearest recognized non-profane word or removes it from the transcript completely"},"punctuate":{"type":"boolean","description":"Adds punctuation and capitalization to the transcript"},"search":{"type":"string","description":"Search for terms or phrases in submitted audio"},"sentiment":{"type":"boolean","description":"Recognizes the sentiment throughout a transcript or text"},"smart_format":{"type":"boolean","description":"Applies formatting to transcript output. When set to true, additional formatting will be applied to transcripts to improve readability"},"summarize":{"type":"string","description":"Summarizes content. For Listen API, supports string version option. For Read API, accepts boolean only."},"tag":{"type":"array","items":{"type":"string"},"description":"Labels your requests for the purpose of identification during usage reporting"},"topics":{"type":"boolean","description":"Detects topics throughout a transcript or text"},"utterances":{"type":"boolean","description":"Segments speech into meaningful semantic units"},"utt_split":{"type":"number","description":"Seconds to wait before detecting a pause between words in submitted audio"},"url":{"type":"string","format":"uri"}},"required":["model","url"]}}}},"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.SpeechToTextCreateResponseDTO"}}}}},"tags":["Voice Models"]}}}} ``` #### Requesting the result of the task from the server using the generation\_id ## GET /v1/stt/{generation\_id} > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.SpeechToTextGetResponseDTO":{"type":"object","properties":{"generation_id":{"type":"string"},"status":{"type":"string","enum":["queued","completed","error","generating"]},"result":{"anyOf":[{"type":"object","properties":{"metadata":{"type":"object","properties":{"transaction_key":{"type":"string","description":"A unique transaction key; currently always “deprecated”."},"request_id":{"type":"string","description":"A UUID identifying this specific transcription request."},"sha256":{"type":"string","description":"The SHA-256 hash of the submitted audio file (for pre-recorded requests)."},"created":{"type":"string","format":"date-time","description":"ISO-8601 timestamp."},"duration":{"type":"number","description":"Length of the audio in seconds."},"channels":{"type":"number","description":"The top-level results object containing per-channel transcription alternatives."},"models":{"type":"array","items":{"type":"string"},"description":"List of model UUIDs used for this 
transcription"},"model_info":{"type":"object","additionalProperties":{"type":"object","properties":{"name":{"type":"string","description":"The human-readable name of the model — identifies which model was used."},"version":{"type":"string","description":"The specific version of the model."},"arch":{"type":"string","description":"The architecture of the model — describes the model family / generation."}},"required":["name","version","arch"]},"description":"Mapping from each model UUID (in 'models') to detailed info: its name, version, and architecture."}},"required":["transaction_key","request_id","sha256","created","duration","channels","models","model_info"],"description":"Metadata about the transcription response, including timing, models, and IDs."},"results":{"type":"object","nullable":true,"properties":{"channels":{"type":"object","properties":{"alternatives":{"type":"array","items":{"type":"object","properties":{"transcript":{"type":"string","description":"The full transcript text for this alternative."},"confidence":{"type":"number","description":"Overall confidence score (0-1) that assigns to this transcript alternative."},"words":{"type":"array","items":{"type":"object","properties":{"word":{"type":"string","description":"The raw recognized word, without punctuation or capitalization."},"start":{"type":"number","description":"Start timestamp of the word (in seconds, from beginning of audio)."},"end":{"type":"number","description":"End timestamp of the word (in seconds)."},"confidence":{"type":"number","description":"Confidence score (0-1) for this individual word."},"punctuated_word":{"type":"string","description":"The same word but with punctuation/capitalization applied (if smart_format is enabled)."}},"required":["word","start","end","confidence","punctuated_word"]},"description":"List of word-level timing, confidence, and punctuation details."},"paragraphs":{"type":"array","items":{"type":"object","properties":{"transcript":{"type":"string","description":"The transcript split into paragraphs (with line breaks), when paragraphing is enabled."},"paragraphs":{"type":"object","properties":{"sentences":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"Text of a single sentence in the paragraph."},"start":{"type":"number","description":"Start time of the sentence (in seconds)."},"end":{"type":"number","description":"End time of the sentence (in seconds)."}},"required":["text","start","end"]},"description":"List of sentences in this paragraph, with start/end times."},"num_words":{"type":"number","description":"Number of words in this paragraph."},"start":{"type":"number","description":"Start time of the paragraph (in seconds)."},"end":{"type":"number","description":"End time of the paragraph (in seconds)."}},"required":["sentences","num_words","start","end"],"description":"Structure describing each paragraph: its timespan, word count, and sentence breakdown."}},"required":["transcript","paragraphs"]},"description":"An array of paragraph objects, present when the paragraphs feature is enabled."}},"required":["transcript","confidence","words","paragraphs"]},"description":"List of possible transcription hypotheses (“alternatives”) for each channel."}},"required":["alternatives"],"description":"The top-level results object containing per-channel transcription 
alternatives."}},"required":["channels"]}},"required":["metadata"]},{"type":"object","properties":{"id":{"type":"string","format":"uuid"},"language_model":{"type":"string"},"acoustic_model":{"type":"string"},"language_code":{"type":"string"},"status":{"type":"string","enum":["queued","processing","completed","error"]},"language_detection":{"type":"boolean"},"language_confidence_threshold":{"type":"number"},"language_confidence":{"type":"number"},"speech_model":{"type":"string","enum":["best","slam-1","universal"]},"text":{"type":"string"},"words":{"type":"array","items":{"type":"object","properties":{"confidence":{"type":"number"},"end":{"type":"number"},"speaker":{"type":"string"},"start":{"type":"number"},"text":{"type":"string"}},"required":["confidence","end","start","text"]}},"utterances":{"type":"array","items":{"type":"object","properties":{"confidence":{"type":"number"},"end":{"type":"number"},"speaker":{"type":"string"},"start":{"type":"number"},"text":{"type":"string"},"words":{"type":"array","items":{"type":"object","properties":{"confidence":{"type":"number"},"end":{"type":"number"},"speaker":{"type":"string"},"start":{"type":"number"},"text":{"type":"string"}},"required":["confidence","end","start","text"]}}},"required":["confidence","end","speaker","start","text","words"]}},"confidence":{"type":"number"},"audio_duration":{"type":"number"},"punctuate":{"type":"boolean"},"format_text":{"type":"boolean"},"disfluencies":{"type":"boolean"},"multichannel":{"type":"boolean"},"webhook_url":{"type":"string"},"webhook_status_code":{"type":"number"},"webhook_auth_header_name":{"type":"string"},"speed_boost":{"type":"boolean"},"auto_highlights_result":{"type":"object","properties":{"status":{"type":"string"},"results":{"type":"array","items":{"type":"object","properties":{"count":{"type":"number"},"rank":{"type":"number"},"text":{"type":"string"},"timestamps":{"type":"array","items":{"type":"object","properties":{"start":{"type":"number"},"end":{"type":"number"}},"required":["start","end"]}}},"required":["count","rank","text","timestamps"]}}},"required":["status","results"]},"auto_highlights":{"type":"boolean"},"audio_start_from":{"type":"number"},"audio_end_at":{"type":"number"},"word_boost":{"type":"array","items":{"type":"string"}},"boost_param":{"type":"string"},"filter_profanity":{"type":"boolean"},"redact_pii":{"type":"boolean"},"redact_pii_audio":{"type":"boolean"},"redact_pii_audio_quality":{"type":"string","enum":["mp3","wav"]},"redact_pii_policies":{"type":"array","items":{"type":"string"}},"redact_pii_sub":{"type":"string","enum":["entity_name","hash"]},"speaker_labels":{"type":"boolean"},"speakers_expected":{"type":"number"},"content_safety":{"type":"boolean"},"iab_categories":{"type":"boolean"},"content_safety_labels":{"type":"object","properties":{"status":{"type":"string"},"results":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string"},"labels":{"type":"array","items":{"type":"object","properties":{"label":{"type":"string"},"confidence":{"type":"number"},"severity":{"type":"number"}},"required":["label","confidence","severity"]}},"sentences_idx_start":{"type":"number"},"sentences_idx_end":{"type":"number"},"timestamp":{"type":"object","properties":{"start":{"type":"number"},"end":{"type":"number"}},"required":["start","end"]}},"required":["text","labels","sentences_idx_start","sentences_idx_end","timestamp"]}},"summary":{"type":"object","additionalProperties":{"type":"number"}}},"required":["status","results","summary"]},"iab_categories_result":{"t
ype":"object","properties":{"status":{"type":"string"},"results":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string"},"labels":{"type":"array","items":{"type":"object","properties":{"relevance":{"type":"number"},"label":{"type":"string"}},"required":["relevance","label"]}},"timestamp":{"type":"object","properties":{"start":{"type":"number"},"end":{"type":"number"}},"required":["start","end"]}},"required":["text","labels","timestamp"]}},"summary":{"type":"object","additionalProperties":{"type":"number"}}},"required":["status","results","summary"]},"custom_spelling":{"type":"array","items":{"type":"object","properties":{"from":{"type":"string"},"to":{"type":"string"}},"required":["from","to"]}},"chapters":{"type":"array","items":{"type":"object","properties":{"summary":{"type":"string"},"headline":{"type":"string"},"gist":{"type":"string"},"start":{"type":"number"},"end":{"type":"number"}},"required":["summary","headline","gist","start","end"]}},"summarization":{"type":"boolean"},"summary_type":{"type":"string"},"summary_model":{"type":"string"},"summary":{"type":"string"},"auto_chapters":{"type":"boolean"},"sentiment_analysis":{"type":"boolean"},"sentiment_analysis_results":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string"},"start":{"type":"number"},"end":{"type":"number"},"sentiment":{"type":"string","enum":["POSITIVE","NEUTRAL","NEGATIVE"]},"confidence":{"type":"number"},"speaker":{"type":"string"}},"required":["text","start","end","sentiment","confidence"]}},"entity_detection":{"type":"boolean"},"entities":{"type":"array","items":{"type":"object","properties":{"entity_type":{"type":"string"},"text":{"type":"string"},"start":{"type":"number"},"end":{"type":"number"}},"required":["entity_type","text","start","end"]}},"speech_threshold":{"type":"number"},"throttled":{"type":"boolean"},"error":{"type":"string"}},"required":["id","status"],"additionalProperties":false},{"type":"object","properties":{"text":{"type":"string"},"usage":{"type":"object","properties":{"type":{"type":"string","enum":["tokens"]},"input_tokens":{"type":"number"},"input_token_details":{"type":"object","properties":{"text_tokens":{"type":"number"},"audio_tokens":{"type":"number"}},"required":["text_tokens","audio_tokens"]},"output_tokens":{"type":"number"},"total_tokens":{"type":"number"}},"required":["input_tokens","output_tokens","total_tokens"]}},"required":["text"],"additionalProperties":false},{"nullable":true}]},"error":{"nullable":true}},"required":["generation_id","status"]}}},"paths":{"/v1/stt/{generation_id}":{"get":{"operationId":"VoiceModelsController_getSTT_v1","parameters":[{"name":"generation_id","required":true,"in":"path","schema":{"type":"string"}}],"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.SpeechToTextGetResponseDTO"}}}}},"tags":["Voice Models"]}}}} ``` ## Quick Code Examples Let's use the `#g1_whisper-base` model to transcribe the following audio fragment: {% embed url="" %} ### Example #1: Processing a Speech Audio File via URL
{% code overflow="wrap" %}
```python
import time
import requests

base_url = "https://api.aimlapi.com/v1"
# Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
api_key = "<YOUR_AIMLAPI_KEY>"

# Creating and sending a speech-to-text conversion task to the server
def create_stt():
    url = f"{base_url}/stt/create"
    headers = {
        "Authorization": f"Bearer {api_key}", 
    }

    data = {
        "model": "#g1_whisper-base",
        "url": "https://audio-samples.github.io/samples/mp3/blizzard_primed/sample-0.mp3"
    }
 
    response = requests.post(url, json=data, headers=headers)
    
    if response.status_code >= 400:
        print(f"Error: {response.status_code} - {response.text}")
    else:
        response_data = response.json()
        print(response_data)
        return response_data

# Requesting the result of the task from the server using the generation_id
def get_stt(gen_id):
    url = f"{base_url}/stt/{gen_id}"
    headers = {
        "Authorization": f"Bearer {api_key}", 
    }
    response = requests.get(url, headers=headers)
    return response.json()
    
# First, start the generation, then repeatedly request the result from the server every 10 seconds.
def main():
    stt_response = create_stt()
    gen_id = stt_response.get("generation_id")


    if gen_id:
        start_time = time.time()

        timeout = 600
        while time.time() - start_time < timeout:
            response_data = get_stt(gen_id)

            if response_data is None:
                print("Error: No response from API")
                break
        
            status = response_data.get("status")

            if status == "waiting" or status == "active":
                print("Still waiting... Checking again in 10 seconds.")
                time.sleep(10)
            else:
                print("Processing complete:\n", response_data["result"]['results']["channels"][0]["alternatives"][0]["transcript"])
                return response_data
   
        print("Timeout reached. Stopping.")
        return None     


if __name__ == "__main__":
    main()
```
{% endcode %}

Response {% code overflow="wrap" %} ``` {'generation_id': 'h66460ba-0562-1dd9-b440-a56d947e72a3'} Processing complete: He doesn't belong to you and i don't see how you have anything to do with what is be his power yet he's he persona from this stage to you be fine ``` {% endcode %}
### Example #2: Processing a Speech Audio File via File Path {% code overflow="wrap" %} ```python import time import requests base_url = "https://api.aimlapi.com/v1" # Insert your AIML API Key instead of : api_key = "" # Creating and sending a speech-to-text conversion task to the server def create_stt(): url = f"{base_url}/stt/create" headers = { "Authorization": f"Bearer {api_key}", } data = { "model": "#g1_whisper-base", } with open("stt-sample.mp3", "rb") as file: files = {"audio": ("sample.mp3", file, "audio/mpeg")} response = requests.post(url, data=data, headers=headers, files=files) if response.status_code >= 400: print(f"Error: {response.status_code} - {response.text}") else: response_data = response.json() print(response_data) return response_data # Requesting the result of the task from the server using the generation_id def get_stt(gen_id): url = f"{base_url}/stt/{gen_id}" headers = { "Authorization": f"Bearer {api_key}", } response = requests.get(url, headers=headers) return response.json() # First, start the generation, then repeatedly request the result from the server every 10 seconds. def main(): stt_response = create_stt() gen_id = stt_response.get("generation_id") if gen_id: start_time = time.time() timeout = 600 while time.time() - start_time < timeout: response_data = get_stt(gen_id) if response_data is None: print("Error: No response from API") break status = response_data.get("status") if status == "waiting" or status == "active": print("Still waiting... Checking again in 10 seconds.") time.sleep(10) else: print("Processing complete:\n", response_data["result"]['results']["channels"][0]["alternatives"][0]["transcript"]) return response_data print("Timeout reached. Stopping.") return None if __name__ == "__main__": main() ``` {% endcode %}
Response {% code overflow="wrap" %} ``` {'generation_id': 'e3d46bba-7562-44a9-b440-504d940342a3'} Processing complete: He doesn't belong to you and i don't see how you have anything to do with what is be his power yet he's he persona from this stage to you be fine ``` {% endcode %}
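The `GET /v1/stt/{generation_id}` schema above also exposes word-level details under `alternatives[0].words` (start/end timestamps in seconds, per-word confidence, and a punctuated form). If you need timing information rather than just the transcript, a helper like the following could be appended to either example; it assumes the same completed `response_data` structure used above.

{% code overflow="wrap" %}
```python
def print_word_timings(response_data):
    # Navigate to the first transcription alternative, as in the examples above
    alternative = response_data["result"]["results"]["channels"][0]["alternatives"][0]
    for word in alternative["words"]:
        # Each entry carries start/end timestamps (seconds) and a confidence score
        print(f"{word['start']:6.2f}-{word['end']:6.2f}s  "
              f"{word['punctuated_word']}  (confidence {word['confidence']:.2f})")
```
{% endcode %}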
--- # Source: https://docs.aimlapi.com/api-references/speech-models/speech-to-text/openai/whisper-large.md # whisper-large {% hint style="info" %} This documentation is valid for the following list of our models: * `#g1_whisper-large` {% endhint %} {% hint style="success" %} Note: Previously, our STT models operated via a single API call to `POST https://api.aimlapi.com/v1/stt`. You can view the API schema [here](https://docs.aimlapi.com/api-references/speech-models/speech-to-text/stt-legacy). Now, we are switching to a new two-step process: * `POST https://api.aimlapi.com/v1/stt/create` – Creates and submits a speech-to-text processing task to the server. This method accepts the same parameters as the old version but returns a `generation_id` instead of the final transcript. * `GET https://api.aimlapi.com/v1/stt/{generation_id}` – Retrieves the generated transcript from the server using the `generation_id` obtained from the previous API call. This approach helps prevent generation failures due to timeouts.\ We've prepared [a couple of examples](#quick-code-examples) below to make the transition to the new STT API easier for you. {% endhint %} ## Model Overview The Whisper models are primarily for AI research, focusing on model robustness, generalization, and biases, and are also effective for English speech recognition. The use of Whisper models for transcribing non-consensual recordings or in high-risk decision-making contexts is strongly discouraged due to potential inaccuracies and ethical concerns. The models are trained using 680,000 hours of audio and corresponding transcripts from the internet, with 65% being English audio and transcripts, 18% non-English audio with English transcripts, and 17% non-English audio with matching non-English transcripts, covering 98 languages in total. {% hint style="success" %} OpenAI STT models are priced based on tokens, similar to chat models. In practice, this means the cost primarily depends on the duration of the input audio. {% endhint %} ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schemas #### Creating and sending a speech-to-text conversion task to the server ## POST /v1/stt/create > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.SpeechToTextCreateResponseDTO":{"type":"object","properties":{"generation_id":{"type":"string","format":"uuid"}},"required":["generation_id"]}}},"paths":{"/v1/stt/create":{"post":{"operationId":"VoiceModelsController_createSpeechToText_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["#g1_whisper-large"]},"custom_intent":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}}],"description":"A custom intent you want the model to detect within your input audio if present. Submit up to 100."},"custom_topic":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}}],"description":"A custom topic you want the model to detect within your input audio if present. 
Submit up to 100."},"custom_intent_mode":{"type":"string","enum":["strict","extended"],"description":"Sets how the model will interpret strings submitted to the custom_intent param. When strict, the model will only return intents submitted using the custom_intent param. When extended, the model will return its own detected intents in addition those submitted using the custom_intents param."},"custom_topic_mode":{"type":"string","enum":["strict","extended"],"description":"Sets how the model will interpret strings submitted to the custom_topic param. When strict, the model will only return topics submitted using the custom_topic param. When extended, the model will return its own detected topics in addition to those submitted using the custom_topic param."},"detect_language":{"type":"boolean","description":"Enables language detection to identify the dominant language spoken in the submitted audio."},"detect_entities":{"type":"boolean","description":"When Entity Detection is enabled, the Punctuation feature will be enabled by default."},"detect_topics":{"type":"boolean","description":"Detects the most important and relevant topics that are referenced in speech within the audio."},"diarize":{"type":"boolean","description":"Recognizes speaker changes. Each word in the transcript will be assigned a speaker number starting at 0."},"dictation":{"type":"boolean","description":"Identifies and extracts key entities from content in submitted audio."},"diarize_version":{"type":"string","description":""},"extra":{"type":"string","description":"Arbitrary key-value pairs that are attached to the API response for usage in downstream processing."},"filler_words":{"type":"boolean","description":"Filler Words can help transcribe interruptions in your audio, like “uh” and “um”."},"intents":{"type":"boolean","description":"Recognizes speaker intent throughout a transcript or text."},"keywords":{"type":"string","description":"Keywords can boost or suppress specialized terminology and brands."},"language":{"type":"string","description":"The BCP-47 language tag that hints at the primary spoken language. Depending on the Model and API endpoint you choose only certain languages are available"},"measurements":{"type":"boolean","description":"Spoken measurements will be converted to their corresponding abbreviations"},"multi_channel":{"type":"boolean","description":"Transcribes each audio channel independently"},"numerals":{"type":"boolean","description":"Numerals converts numbers from written format to numerical format"},"paragraphs":{"type":"boolean","description":"Splits audio into paragraphs to improve transcript readability"},"profanity_filter":{"type":"boolean","description":"Profanity Filter looks for recognized profanity and converts it to the nearest recognized non-profane word or removes it from the transcript completely"},"punctuate":{"type":"boolean","description":"Adds punctuation and capitalization to the transcript"},"search":{"type":"string","description":"Search for terms or phrases in submitted audio"},"sentiment":{"type":"boolean","description":"Recognizes the sentiment throughout a transcript or text"},"smart_format":{"type":"boolean","description":"Applies formatting to transcript output. When set to true, additional formatting will be applied to transcripts to improve readability"},"summarize":{"type":"string","description":"Summarizes content. For Listen API, supports string version option. 
For Read API, accepts boolean only."},"tag":{"type":"array","items":{"type":"string"},"description":"Labels your requests for the purpose of identification during usage reporting"},"topics":{"type":"boolean","description":"Detects topics throughout a transcript or text"},"utterances":{"type":"boolean","description":"Segments speech into meaningful semantic units"},"utt_split":{"type":"number","description":"Seconds to wait before detecting a pause between words in submitted audio"},"url":{"type":"string","format":"uri"}},"required":["model","url"]}}}},"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.SpeechToTextCreateResponseDTO"}}}}},"tags":["Voice Models"]}}}} ``` #### Requesting the result of the task from the server using the generation\_id ## GET /v1/stt/{generation\_id} > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.SpeechToTextGetResponseDTO":{"type":"object","properties":{"generation_id":{"type":"string"},"status":{"type":"string","enum":["queued","completed","error","generating"]},"result":{"anyOf":[{"type":"object","properties":{"metadata":{"type":"object","properties":{"transaction_key":{"type":"string","description":"A unique transaction key; currently always “deprecated”."},"request_id":{"type":"string","description":"A UUID identifying this specific transcription request."},"sha256":{"type":"string","description":"The SHA-256 hash of the submitted audio file (for pre-recorded requests)."},"created":{"type":"string","format":"date-time","description":"ISO-8601 timestamp."},"duration":{"type":"number","description":"Length of the audio in seconds."},"channels":{"type":"number","description":"The top-level results object containing per-channel transcription alternatives."},"models":{"type":"array","items":{"type":"string"},"description":"List of model UUIDs used for this transcription"},"model_info":{"type":"object","additionalProperties":{"type":"object","properties":{"name":{"type":"string","description":"The human-readable name of the model — identifies which model was used."},"version":{"type":"string","description":"The specific version of the model."},"arch":{"type":"string","description":"The architecture of the model — describes the model family / generation."}},"required":["name","version","arch"]},"description":"Mapping from each model UUID (in 'models') to detailed info: its name, version, and architecture."}},"required":["transaction_key","request_id","sha256","created","duration","channels","models","model_info"],"description":"Metadata about the transcription response, including timing, models, and IDs."},"results":{"type":"object","nullable":true,"properties":{"channels":{"type":"object","properties":{"alternatives":{"type":"array","items":{"type":"object","properties":{"transcript":{"type":"string","description":"The full transcript text for this alternative."},"confidence":{"type":"number","description":"Overall confidence score (0-1) that assigns to this transcript alternative."},"words":{"type":"array","items":{"type":"object","properties":{"word":{"type":"string","description":"The raw recognized word, without punctuation or capitalization."},"start":{"type":"number","description":"Start timestamp of the word (in seconds, from beginning of 
audio)."},"end":{"type":"number","description":"End timestamp of the word (in seconds)."},"confidence":{"type":"number","description":"Confidence score (0-1) for this individual word."},"punctuated_word":{"type":"string","description":"The same word but with punctuation/capitalization applied (if smart_format is enabled)."}},"required":["word","start","end","confidence","punctuated_word"]},"description":"List of word-level timing, confidence, and punctuation details."},"paragraphs":{"type":"array","items":{"type":"object","properties":{"transcript":{"type":"string","description":"The transcript split into paragraphs (with line breaks), when paragraphing is enabled."},"paragraphs":{"type":"object","properties":{"sentences":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"Text of a single sentence in the paragraph."},"start":{"type":"number","description":"Start time of the sentence (in seconds)."},"end":{"type":"number","description":"End time of the sentence (in seconds)."}},"required":["text","start","end"]},"description":"List of sentences in this paragraph, with start/end times."},"num_words":{"type":"number","description":"Number of words in this paragraph."},"start":{"type":"number","description":"Start time of the paragraph (in seconds)."},"end":{"type":"number","description":"End time of the paragraph (in seconds)."}},"required":["sentences","num_words","start","end"],"description":"Structure describing each paragraph: its timespan, word count, and sentence breakdown."}},"required":["transcript","paragraphs"]},"description":"An array of paragraph objects, present when the paragraphs feature is enabled."}},"required":["transcript","confidence","words","paragraphs"]},"description":"List of possible transcription hypotheses (“alternatives”) for each channel."}},"required":["alternatives"],"description":"The top-level results object containing per-channel transcription 
alternatives."}},"required":["channels"]}},"required":["metadata"]},{"type":"object","properties":{"id":{"type":"string","format":"uuid"},"language_model":{"type":"string"},"acoustic_model":{"type":"string"},"language_code":{"type":"string"},"status":{"type":"string","enum":["queued","processing","completed","error"]},"language_detection":{"type":"boolean"},"language_confidence_threshold":{"type":"number"},"language_confidence":{"type":"number"},"speech_model":{"type":"string","enum":["best","slam-1","universal"]},"text":{"type":"string"},"words":{"type":"array","items":{"type":"object","properties":{"confidence":{"type":"number"},"end":{"type":"number"},"speaker":{"type":"string"},"start":{"type":"number"},"text":{"type":"string"}},"required":["confidence","end","start","text"]}},"utterances":{"type":"array","items":{"type":"object","properties":{"confidence":{"type":"number"},"end":{"type":"number"},"speaker":{"type":"string"},"start":{"type":"number"},"text":{"type":"string"},"words":{"type":"array","items":{"type":"object","properties":{"confidence":{"type":"number"},"end":{"type":"number"},"speaker":{"type":"string"},"start":{"type":"number"},"text":{"type":"string"}},"required":["confidence","end","start","text"]}}},"required":["confidence","end","speaker","start","text","words"]}},"confidence":{"type":"number"},"audio_duration":{"type":"number"},"punctuate":{"type":"boolean"},"format_text":{"type":"boolean"},"disfluencies":{"type":"boolean"},"multichannel":{"type":"boolean"},"webhook_url":{"type":"string"},"webhook_status_code":{"type":"number"},"webhook_auth_header_name":{"type":"string"},"speed_boost":{"type":"boolean"},"auto_highlights_result":{"type":"object","properties":{"status":{"type":"string"},"results":{"type":"array","items":{"type":"object","properties":{"count":{"type":"number"},"rank":{"type":"number"},"text":{"type":"string"},"timestamps":{"type":"array","items":{"type":"object","properties":{"start":{"type":"number"},"end":{"type":"number"}},"required":["start","end"]}}},"required":["count","rank","text","timestamps"]}}},"required":["status","results"]},"auto_highlights":{"type":"boolean"},"audio_start_from":{"type":"number"},"audio_end_at":{"type":"number"},"word_boost":{"type":"array","items":{"type":"string"}},"boost_param":{"type":"string"},"filter_profanity":{"type":"boolean"},"redact_pii":{"type":"boolean"},"redact_pii_audio":{"type":"boolean"},"redact_pii_audio_quality":{"type":"string","enum":["mp3","wav"]},"redact_pii_policies":{"type":"array","items":{"type":"string"}},"redact_pii_sub":{"type":"string","enum":["entity_name","hash"]},"speaker_labels":{"type":"boolean"},"speakers_expected":{"type":"number"},"content_safety":{"type":"boolean"},"iab_categories":{"type":"boolean"},"content_safety_labels":{"type":"object","properties":{"status":{"type":"string"},"results":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string"},"labels":{"type":"array","items":{"type":"object","properties":{"label":{"type":"string"},"confidence":{"type":"number"},"severity":{"type":"number"}},"required":["label","confidence","severity"]}},"sentences_idx_start":{"type":"number"},"sentences_idx_end":{"type":"number"},"timestamp":{"type":"object","properties":{"start":{"type":"number"},"end":{"type":"number"}},"required":["start","end"]}},"required":["text","labels","sentences_idx_start","sentences_idx_end","timestamp"]}},"summary":{"type":"object","additionalProperties":{"type":"number"}}},"required":["status","results","summary"]},"iab_categories_result":{"t
ype":"object","properties":{"status":{"type":"string"},"results":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string"},"labels":{"type":"array","items":{"type":"object","properties":{"relevance":{"type":"number"},"label":{"type":"string"}},"required":["relevance","label"]}},"timestamp":{"type":"object","properties":{"start":{"type":"number"},"end":{"type":"number"}},"required":["start","end"]}},"required":["text","labels","timestamp"]}},"summary":{"type":"object","additionalProperties":{"type":"number"}}},"required":["status","results","summary"]},"custom_spelling":{"type":"array","items":{"type":"object","properties":{"from":{"type":"string"},"to":{"type":"string"}},"required":["from","to"]}},"chapters":{"type":"array","items":{"type":"object","properties":{"summary":{"type":"string"},"headline":{"type":"string"},"gist":{"type":"string"},"start":{"type":"number"},"end":{"type":"number"}},"required":["summary","headline","gist","start","end"]}},"summarization":{"type":"boolean"},"summary_type":{"type":"string"},"summary_model":{"type":"string"},"summary":{"type":"string"},"auto_chapters":{"type":"boolean"},"sentiment_analysis":{"type":"boolean"},"sentiment_analysis_results":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string"},"start":{"type":"number"},"end":{"type":"number"},"sentiment":{"type":"string","enum":["POSITIVE","NEUTRAL","NEGATIVE"]},"confidence":{"type":"number"},"speaker":{"type":"string"}},"required":["text","start","end","sentiment","confidence"]}},"entity_detection":{"type":"boolean"},"entities":{"type":"array","items":{"type":"object","properties":{"entity_type":{"type":"string"},"text":{"type":"string"},"start":{"type":"number"},"end":{"type":"number"}},"required":["entity_type","text","start","end"]}},"speech_threshold":{"type":"number"},"throttled":{"type":"boolean"},"error":{"type":"string"}},"required":["id","status"],"additionalProperties":false},{"type":"object","properties":{"text":{"type":"string"},"usage":{"type":"object","properties":{"type":{"type":"string","enum":["tokens"]},"input_tokens":{"type":"number"},"input_token_details":{"type":"object","properties":{"text_tokens":{"type":"number"},"audio_tokens":{"type":"number"}},"required":["text_tokens","audio_tokens"]},"output_tokens":{"type":"number"},"total_tokens":{"type":"number"}},"required":["input_tokens","output_tokens","total_tokens"]}},"required":["text"],"additionalProperties":false},{"nullable":true}]},"error":{"nullable":true}},"required":["generation_id","status"]}}},"paths":{"/v1/stt/{generation_id}":{"get":{"operationId":"VoiceModelsController_getSTT_v1","parameters":[{"name":"generation_id","required":true,"in":"path","schema":{"type":"string"}}],"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.SpeechToTextGetResponseDTO"}}}}},"tags":["Voice Models"]}}}} ``` ## Quick Code Examples Let's use the `#g1_whisper-large` model to transcribe the following audio fragment: {% embed url="" %} ### Example #1: Processing a Speech Audio File via URL
{% code overflow="wrap" %}
```python
import time
import requests

base_url = "https://api.aimlapi.com/v1"
# Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
api_key = "<YOUR_AIMLAPI_KEY>"

# Creating and sending a speech-to-text conversion task to the server
def create_stt():
    url = f"{base_url}/stt/create"
    headers = {
        "Authorization": f"Bearer {api_key}", 
    }

    data = {
        "model": "#g1_whisper-large",
        "url": "https://audio-samples.github.io/samples/mp3/blizzard_primed/sample-0.mp3"
    }
 
    response = requests.post(url, json=data, headers=headers)
    
    if response.status_code >= 400:
        print(f"Error: {response.status_code} - {response.text}")
    else:
        response_data = response.json()
        print(response_data)
        return response_data

# Requesting the result of the task from the server using the generation_id
def get_stt(gen_id):
    url = f"{base_url}/stt/{gen_id}"
    headers = {
        "Authorization": f"Bearer {api_key}", 
    }
    response = requests.get(url, headers=headers)
    return response.json()
    
# First, start the generation, then repeatedly request the result from the server every 10 seconds.
def main():
    stt_response = create_stt()
    gen_id = stt_response.get("generation_id")


    if gen_id:
        start_time = time.time()

        timeout = 600
        while time.time() - start_time < timeout:
            response_data = get_stt(gen_id)

            if response_data is None:
                print("Error: No response from API")
                break
        
            status = response_data.get("status")
            if status == "waiting" or status == "active":
                print("Still waiting... Checking again in 10 seconds.")
                time.sleep(10)
            else:
                print("Processing complete:\n", response_data["result"]['results']["channels"][0]["alternatives"][0]["transcript"])
                return response_data
   
        print("Timeout reached. Stopping.")
        return None     


if __name__ == "__main__":
    main()
```
{% endcode %}

**Response**:

{% code overflow="wrap" %}
```
{'generation_id': 'e3d46bba-7562-44a9-b440-504d940342a3'}
Processing complete:
 he doesn't belong to you and i don't see how you have anything to do with what is be his power yet he's he personified from this stage to you be fire
```
{% endcode %}
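Besides the plain transcript, the completed `result` object described in the GET schema above can also carry word-level details (`word`, `start`, `end`, `confidence`, `punctuated_word`). Below is a minimal sketch of reading them from the `response_data` returned by `get_stt`; it assumes the same nested layout that Example #1 uses for the transcript, and the `words` array may be absent or empty depending on the request options.

{% code overflow="wrap" %}
```python
# Illustrative sketch only: response_data is assumed to be the completed object
# returned by get_stt() above, with the same nesting used for the transcript.
def print_word_timings(response_data):
    alternative = response_data["result"]["results"]["channels"][0]["alternatives"][0]
    for word in alternative.get("words", []):
        # start/end are timestamps in seconds; punctuated_word falls back to the raw word.
        text = word.get("punctuated_word", word["word"])
        print(f'{word["start"]:7.2f} -> {word["end"]:7.2f}  {text}  (confidence {word["confidence"]:.2f})')
```
{% endcode %}

For instance, you could call `print_word_timings(response_data)` right after the "Processing complete" branch in `main()`.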
### Example #2: Processing a Speech Audio File via File Path

{% code overflow="wrap" %}
```python
import time
import requests

base_url = "https://api.aimlapi.com/v1"
# Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
api_key = "<YOUR_AIMLAPI_KEY>"

# Creating and sending a speech-to-text conversion task to the server
def create_stt():
    url = f"{base_url}/stt/create"
    headers = {
        "Authorization": f"Bearer {api_key}",
    }

    data = {
        "model": "#g1_whisper-large",
    }

    with open("stt-sample.mp3", "rb") as file:
        files = {"audio": ("sample.mp3", file, "audio/mpeg")}
        response = requests.post(url, data=data, headers=headers, files=files)

    if response.status_code >= 400:
        print(f"Error: {response.status_code} - {response.text}")
    else:
        response_data = response.json()
        print(response_data)
        return response_data

# Requesting the result of the task from the server using the generation_id
def get_stt(gen_id):
    url = f"{base_url}/stt/{gen_id}"
    headers = {
        "Authorization": f"Bearer {api_key}",
    }
    response = requests.get(url, headers=headers)
    return response.json()

# First, start the generation, then repeatedly request the result from the server every 10 seconds.
def main():
    stt_response = create_stt()
    gen_id = stt_response.get("generation_id")

    if gen_id:
        start_time = time.time()

        timeout = 600
        while time.time() - start_time < timeout:
            response_data = get_stt(gen_id)

            if response_data is None:
                print("Error: No response from API")
                break

            status = response_data.get("status")
            if status == "waiting" or status == "active":
                print("Still waiting... Checking again in 10 seconds.")
                time.sleep(10)
            else:
                print("Processing complete:\n", response_data["result"]['results']["channels"][0]["alternatives"][0]["transcript"])
                return response_data

        print("Timeout reached. Stopping.")
        return None


if __name__ == "__main__":
    main()
```
{% endcode %}
**Response**:

{% code overflow="wrap" %}
```
{'generation_id': 'dd412e9d-044c-43ae-b97b-e920755074d5'}
Processing complete:
 he doesn't belong to you and i don't see how you have anything to do with what is be his power yet he's he personified from this stage to you be fire
```
{% endcode %}
--- # Source: https://docs.aimlapi.com/api-references/speech-models/speech-to-text/openai/whisper-medium.md # whisper-medium {% hint style="info" %} This documentation is valid for the following list of our models: * `#g1_whisper-medium` {% endhint %} {% hint style="success" %} Note: Previously, our STT models operated via a single API call to `POST https://api.aimlapi.com/v1/stt`. You can view the API schema [here](https://docs.aimlapi.com/api-references/speech-models/speech-to-text/stt-legacy). Now, we are switching to a new two-step process: * `POST https://api.aimlapi.com/v1/stt/create` – Creates and submits a speech-to-text processing task to the server. This method accepts the same parameters as the old version but returns a `generation_id` instead of the final transcript. * `GET https://api.aimlapi.com/v1/stt/{generation_id}` – Retrieves the generated transcript from the server using the `generation_id` obtained from the previous API call. This approach helps prevent generation failures due to timeouts.\ We've prepared [a couple of examples](#quick-code-examples) below to make the transition to the new STT API easier for you. {% endhint %} ## Model Overview The Whisper models are primarily for AI research, focusing on model robustness, generalization, and biases, and are also effective for English speech recognition. The use of Whisper models for transcribing non-consensual recordings or in high-risk decision-making contexts is strongly discouraged due to potential inaccuracies and ethical concerns. The models are trained using 680,000 hours of audio and corresponding transcripts from the internet, with 65% being English audio and transcripts, 18% non-English audio with English transcripts, and 17% non-English audio with matching non-English transcripts, covering 98 languages in total. {% hint style="success" %} OpenAI STT models are priced based on tokens, similar to chat models. In practice, this means the cost primarily depends on the duration of the input audio. {% endhint %} ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schemas #### Creating and sending a speech-to-text conversion task to the server ## POST /v1/stt/create > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.SpeechToTextCreateResponseDTO":{"type":"object","properties":{"generation_id":{"type":"string","format":"uuid"}},"required":["generation_id"]}}},"paths":{"/v1/stt/create":{"post":{"operationId":"VoiceModelsController_createSpeechToText_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["#g1_whisper-medium"]},"custom_intent":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}}],"description":"A custom intent you want the model to detect within your input audio if present. Submit up to 100."},"custom_topic":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}}],"description":"A custom topic you want the model to detect within your input audio if present. 
Submit up to 100."},"custom_intent_mode":{"type":"string","enum":["strict","extended"],"description":"Sets how the model will interpret strings submitted to the custom_intent param. When strict, the model will only return intents submitted using the custom_intent param. When extended, the model will return its own detected intents in addition those submitted using the custom_intents param."},"custom_topic_mode":{"type":"string","enum":["strict","extended"],"description":"Sets how the model will interpret strings submitted to the custom_topic param. When strict, the model will only return topics submitted using the custom_topic param. When extended, the model will return its own detected topics in addition to those submitted using the custom_topic param."},"detect_language":{"type":"boolean","description":"Enables language detection to identify the dominant language spoken in the submitted audio."},"detect_entities":{"type":"boolean","description":"When Entity Detection is enabled, the Punctuation feature will be enabled by default."},"detect_topics":{"type":"boolean","description":"Detects the most important and relevant topics that are referenced in speech within the audio."},"diarize":{"type":"boolean","description":"Recognizes speaker changes. Each word in the transcript will be assigned a speaker number starting at 0."},"dictation":{"type":"boolean","description":"Identifies and extracts key entities from content in submitted audio."},"diarize_version":{"type":"string","description":""},"extra":{"type":"string","description":"Arbitrary key-value pairs that are attached to the API response for usage in downstream processing."},"filler_words":{"type":"boolean","description":"Filler Words can help transcribe interruptions in your audio, like “uh” and “um”."},"intents":{"type":"boolean","description":"Recognizes speaker intent throughout a transcript or text."},"keywords":{"type":"string","description":"Keywords can boost or suppress specialized terminology and brands."},"language":{"type":"string","description":"The BCP-47 language tag that hints at the primary spoken language. Depending on the Model and API endpoint you choose only certain languages are available"},"measurements":{"type":"boolean","description":"Spoken measurements will be converted to their corresponding abbreviations"},"multi_channel":{"type":"boolean","description":"Transcribes each audio channel independently"},"numerals":{"type":"boolean","description":"Numerals converts numbers from written format to numerical format"},"paragraphs":{"type":"boolean","description":"Splits audio into paragraphs to improve transcript readability"},"profanity_filter":{"type":"boolean","description":"Profanity Filter looks for recognized profanity and converts it to the nearest recognized non-profane word or removes it from the transcript completely"},"punctuate":{"type":"boolean","description":"Adds punctuation and capitalization to the transcript"},"search":{"type":"string","description":"Search for terms or phrases in submitted audio"},"sentiment":{"type":"boolean","description":"Recognizes the sentiment throughout a transcript or text"},"smart_format":{"type":"boolean","description":"Applies formatting to transcript output. When set to true, additional formatting will be applied to transcripts to improve readability"},"summarize":{"type":"string","description":"Summarizes content. For Listen API, supports string version option. 
For Read API, accepts boolean only."},"tag":{"type":"array","items":{"type":"string"},"description":"Labels your requests for the purpose of identification during usage reporting"},"topics":{"type":"boolean","description":"Detects topics throughout a transcript or text"},"utterances":{"type":"boolean","description":"Segments speech into meaningful semantic units"},"utt_split":{"type":"number","description":"Seconds to wait before detecting a pause between words in submitted audio"},"url":{"type":"string","format":"uri"}},"required":["model","url"]}}}},"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.SpeechToTextCreateResponseDTO"}}}}},"tags":["Voice Models"]}}}} ``` #### Requesting the result of the task from the server using the generation\_id ## GET /v1/stt/{generation\_id} > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.SpeechToTextGetResponseDTO":{"type":"object","properties":{"generation_id":{"type":"string"},"status":{"type":"string","enum":["queued","completed","error","generating"]},"result":{"anyOf":[{"type":"object","properties":{"metadata":{"type":"object","properties":{"transaction_key":{"type":"string","description":"A unique transaction key; currently always “deprecated”."},"request_id":{"type":"string","description":"A UUID identifying this specific transcription request."},"sha256":{"type":"string","description":"The SHA-256 hash of the submitted audio file (for pre-recorded requests)."},"created":{"type":"string","format":"date-time","description":"ISO-8601 timestamp."},"duration":{"type":"number","description":"Length of the audio in seconds."},"channels":{"type":"number","description":"The top-level results object containing per-channel transcription alternatives."},"models":{"type":"array","items":{"type":"string"},"description":"List of model UUIDs used for this transcription"},"model_info":{"type":"object","additionalProperties":{"type":"object","properties":{"name":{"type":"string","description":"The human-readable name of the model — identifies which model was used."},"version":{"type":"string","description":"The specific version of the model."},"arch":{"type":"string","description":"The architecture of the model — describes the model family / generation."}},"required":["name","version","arch"]},"description":"Mapping from each model UUID (in 'models') to detailed info: its name, version, and architecture."}},"required":["transaction_key","request_id","sha256","created","duration","channels","models","model_info"],"description":"Metadata about the transcription response, including timing, models, and IDs."},"results":{"type":"object","nullable":true,"properties":{"channels":{"type":"object","properties":{"alternatives":{"type":"array","items":{"type":"object","properties":{"transcript":{"type":"string","description":"The full transcript text for this alternative."},"confidence":{"type":"number","description":"Overall confidence score (0-1) that assigns to this transcript alternative."},"words":{"type":"array","items":{"type":"object","properties":{"word":{"type":"string","description":"The raw recognized word, without punctuation or capitalization."},"start":{"type":"number","description":"Start timestamp of the word (in seconds, from beginning of 
audio)."},"end":{"type":"number","description":"End timestamp of the word (in seconds)."},"confidence":{"type":"number","description":"Confidence score (0-1) for this individual word."},"punctuated_word":{"type":"string","description":"The same word but with punctuation/capitalization applied (if smart_format is enabled)."}},"required":["word","start","end","confidence","punctuated_word"]},"description":"List of word-level timing, confidence, and punctuation details."},"paragraphs":{"type":"array","items":{"type":"object","properties":{"transcript":{"type":"string","description":"The transcript split into paragraphs (with line breaks), when paragraphing is enabled."},"paragraphs":{"type":"object","properties":{"sentences":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"Text of a single sentence in the paragraph."},"start":{"type":"number","description":"Start time of the sentence (in seconds)."},"end":{"type":"number","description":"End time of the sentence (in seconds)."}},"required":["text","start","end"]},"description":"List of sentences in this paragraph, with start/end times."},"num_words":{"type":"number","description":"Number of words in this paragraph."},"start":{"type":"number","description":"Start time of the paragraph (in seconds)."},"end":{"type":"number","description":"End time of the paragraph (in seconds)."}},"required":["sentences","num_words","start","end"],"description":"Structure describing each paragraph: its timespan, word count, and sentence breakdown."}},"required":["transcript","paragraphs"]},"description":"An array of paragraph objects, present when the paragraphs feature is enabled."}},"required":["transcript","confidence","words","paragraphs"]},"description":"List of possible transcription hypotheses (“alternatives”) for each channel."}},"required":["alternatives"],"description":"The top-level results object containing per-channel transcription 
alternatives."}},"required":["channels"]}},"required":["metadata"]},{"type":"object","properties":{"id":{"type":"string","format":"uuid"},"language_model":{"type":"string"},"acoustic_model":{"type":"string"},"language_code":{"type":"string"},"status":{"type":"string","enum":["queued","processing","completed","error"]},"language_detection":{"type":"boolean"},"language_confidence_threshold":{"type":"number"},"language_confidence":{"type":"number"},"speech_model":{"type":"string","enum":["best","slam-1","universal"]},"text":{"type":"string"},"words":{"type":"array","items":{"type":"object","properties":{"confidence":{"type":"number"},"end":{"type":"number"},"speaker":{"type":"string"},"start":{"type":"number"},"text":{"type":"string"}},"required":["confidence","end","start","text"]}},"utterances":{"type":"array","items":{"type":"object","properties":{"confidence":{"type":"number"},"end":{"type":"number"},"speaker":{"type":"string"},"start":{"type":"number"},"text":{"type":"string"},"words":{"type":"array","items":{"type":"object","properties":{"confidence":{"type":"number"},"end":{"type":"number"},"speaker":{"type":"string"},"start":{"type":"number"},"text":{"type":"string"}},"required":["confidence","end","start","text"]}}},"required":["confidence","end","speaker","start","text","words"]}},"confidence":{"type":"number"},"audio_duration":{"type":"number"},"punctuate":{"type":"boolean"},"format_text":{"type":"boolean"},"disfluencies":{"type":"boolean"},"multichannel":{"type":"boolean"},"webhook_url":{"type":"string"},"webhook_status_code":{"type":"number"},"webhook_auth_header_name":{"type":"string"},"speed_boost":{"type":"boolean"},"auto_highlights_result":{"type":"object","properties":{"status":{"type":"string"},"results":{"type":"array","items":{"type":"object","properties":{"count":{"type":"number"},"rank":{"type":"number"},"text":{"type":"string"},"timestamps":{"type":"array","items":{"type":"object","properties":{"start":{"type":"number"},"end":{"type":"number"}},"required":["start","end"]}}},"required":["count","rank","text","timestamps"]}}},"required":["status","results"]},"auto_highlights":{"type":"boolean"},"audio_start_from":{"type":"number"},"audio_end_at":{"type":"number"},"word_boost":{"type":"array","items":{"type":"string"}},"boost_param":{"type":"string"},"filter_profanity":{"type":"boolean"},"redact_pii":{"type":"boolean"},"redact_pii_audio":{"type":"boolean"},"redact_pii_audio_quality":{"type":"string","enum":["mp3","wav"]},"redact_pii_policies":{"type":"array","items":{"type":"string"}},"redact_pii_sub":{"type":"string","enum":["entity_name","hash"]},"speaker_labels":{"type":"boolean"},"speakers_expected":{"type":"number"},"content_safety":{"type":"boolean"},"iab_categories":{"type":"boolean"},"content_safety_labels":{"type":"object","properties":{"status":{"type":"string"},"results":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string"},"labels":{"type":"array","items":{"type":"object","properties":{"label":{"type":"string"},"confidence":{"type":"number"},"severity":{"type":"number"}},"required":["label","confidence","severity"]}},"sentences_idx_start":{"type":"number"},"sentences_idx_end":{"type":"number"},"timestamp":{"type":"object","properties":{"start":{"type":"number"},"end":{"type":"number"}},"required":["start","end"]}},"required":["text","labels","sentences_idx_start","sentences_idx_end","timestamp"]}},"summary":{"type":"object","additionalProperties":{"type":"number"}}},"required":["status","results","summary"]},"iab_categories_result":{"t
ype":"object","properties":{"status":{"type":"string"},"results":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string"},"labels":{"type":"array","items":{"type":"object","properties":{"relevance":{"type":"number"},"label":{"type":"string"}},"required":["relevance","label"]}},"timestamp":{"type":"object","properties":{"start":{"type":"number"},"end":{"type":"number"}},"required":["start","end"]}},"required":["text","labels","timestamp"]}},"summary":{"type":"object","additionalProperties":{"type":"number"}}},"required":["status","results","summary"]},"custom_spelling":{"type":"array","items":{"type":"object","properties":{"from":{"type":"string"},"to":{"type":"string"}},"required":["from","to"]}},"chapters":{"type":"array","items":{"type":"object","properties":{"summary":{"type":"string"},"headline":{"type":"string"},"gist":{"type":"string"},"start":{"type":"number"},"end":{"type":"number"}},"required":["summary","headline","gist","start","end"]}},"summarization":{"type":"boolean"},"summary_type":{"type":"string"},"summary_model":{"type":"string"},"summary":{"type":"string"},"auto_chapters":{"type":"boolean"},"sentiment_analysis":{"type":"boolean"},"sentiment_analysis_results":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string"},"start":{"type":"number"},"end":{"type":"number"},"sentiment":{"type":"string","enum":["POSITIVE","NEUTRAL","NEGATIVE"]},"confidence":{"type":"number"},"speaker":{"type":"string"}},"required":["text","start","end","sentiment","confidence"]}},"entity_detection":{"type":"boolean"},"entities":{"type":"array","items":{"type":"object","properties":{"entity_type":{"type":"string"},"text":{"type":"string"},"start":{"type":"number"},"end":{"type":"number"}},"required":["entity_type","text","start","end"]}},"speech_threshold":{"type":"number"},"throttled":{"type":"boolean"},"error":{"type":"string"}},"required":["id","status"],"additionalProperties":false},{"type":"object","properties":{"text":{"type":"string"},"usage":{"type":"object","properties":{"type":{"type":"string","enum":["tokens"]},"input_tokens":{"type":"number"},"input_token_details":{"type":"object","properties":{"text_tokens":{"type":"number"},"audio_tokens":{"type":"number"}},"required":["text_tokens","audio_tokens"]},"output_tokens":{"type":"number"},"total_tokens":{"type":"number"}},"required":["input_tokens","output_tokens","total_tokens"]}},"required":["text"],"additionalProperties":false},{"nullable":true}]},"error":{"nullable":true}},"required":["generation_id","status"]}}},"paths":{"/v1/stt/{generation_id}":{"get":{"operationId":"VoiceModelsController_getSTT_v1","parameters":[{"name":"generation_id","required":true,"in":"path","schema":{"type":"string"}}],"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.SpeechToTextGetResponseDTO"}}}}},"tags":["Voice Models"]}}}} ``` ## Quick Code Examples Let's use the `#g1_whisper-medium` model to transcribe the following audio fragment: {% embed url="" %} ### Example #1: Processing a Speech Audio File via URL
{% code overflow="wrap" %}
```python
import time
import requests

base_url = "https://api.aimlapi.com/v1"
# Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
api_key = "<YOUR_AIMLAPI_KEY>"

# Creating and sending a speech-to-text conversion task to the server
def create_stt():
    url = f"{base_url}/stt/create"
    headers = {
        "Authorization": f"Bearer {api_key}", 
    }

    data = {
        "model": "#g1_whisper-medium",
        "url": "https://audio-samples.github.io/samples/mp3/blizzard_primed/sample-0.mp3"
    }
 
    response = requests.post(url, json=data, headers=headers)
    
    if response.status_code >= 400:
        print(f"Error: {response.status_code} - {response.text}")
    else:
        response_data = response.json()
        print(response_data)
        return response_data

# Requesting the result of the task from the server using the generation_id
def get_stt(gen_id):
    url = f"{base_url}/stt/{gen_id}"
    headers = {
        "Authorization": f"Bearer {api_key}", 
    }
    response = requests.get(url, headers=headers)
    return response.json()
    
# First, start the generation, then repeatedly request the result from the server every 10 seconds.
def main():
    stt_response = create_stt()
    gen_id = stt_response.get("generation_id")


    if gen_id:
        start_time = time.time()

        timeout = 600
        while time.time() - start_time < timeout:
            response_data = get_stt(gen_id)

            if response_data is None:
                print("Error: No response from API")
                break
        
            status = response_data.get("status")

            if status == "waiting" or status == "active":
                ("Still waiting... Checking again in 10 seconds.")
                time.sleep(10)
            else:
                print("Processing complete:\n", response_data["result"]['results']["channels"][0]["alternatives"][0]["transcript"])
                return response_data
   
        print("Timeout reached. Stopping.")
        return None     


if __name__ == "__main__":
    main()

```
{% endcode %}

**Response**:

{% code overflow="wrap" %}
```
{'generation_id': 'e3d46bba-7562-44a9-b440-504d940342a3'}
Processing complete:
 He doesn't belong to you and i don't see how you have anything to do with what is be his power yet he's he personified from this stage to you be fire
```
{% endcode %}
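The completed result may also include a `metadata` object (see the GET schema above: `duration`, `models`, `model_info`, and so on). Below is a small sketch of reading it from the `response_data` returned by `get_stt` in Example #1; the presence of this branch depends on the response variant, so the lookups are guarded.

{% code overflow="wrap" %}
```python
# Illustrative only: response_data is assumed to be the completed object
# returned by get_stt() above, with the "metadata" branch described in the schema.
metadata = (response_data.get("result") or {}).get("metadata", {})
if metadata:
    print(f"Audio duration: {metadata.get('duration')} seconds")
    for model_id in metadata.get("models", []):
        info = metadata.get("model_info", {}).get(model_id, {})
        print(f"Model {model_id}: {info.get('name')} v{info.get('version')} ({info.get('arch')})")
```
{% endcode %}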
### Example #2: Processing a Speech Audio File via File Path

{% code overflow="wrap" %}
```python
import time
import requests

base_url = "https://api.aimlapi.com/v1"
# Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
api_key = "<YOUR_AIMLAPI_KEY>"

# Creating and sending a speech-to-text conversion task to the server
def create_stt():
    url = f"{base_url}/stt/create"
    headers = {
        "Authorization": f"Bearer {api_key}",
    }

    data = {
        "model": "#g1_whisper-medium",
    }

    with open("stt-sample.mp3", "rb") as file:
        files = {"audio": ("sample.mp3", file, "audio/mpeg")}
        response = requests.post(url, data=data, headers=headers, files=files)

    if response.status_code >= 400:
        print(f"Error: {response.status_code} - {response.text}")
    else:
        response_data = response.json()
        print(response_data)
        return response_data

# Requesting the result of the task from the server using the generation_id
def get_stt(gen_id):
    url = f"{base_url}/stt/{gen_id}"
    headers = {
        "Authorization": f"Bearer {api_key}",
    }
    response = requests.get(url, headers=headers)
    return response.json()

# First, start the generation, then repeatedly request the result from the server every 10 seconds.
def main():
    stt_response = create_stt()
    gen_id = stt_response.get("generation_id")

    if gen_id:
        start_time = time.time()

        timeout = 600
        while time.time() - start_time < timeout:
            response_data = get_stt(gen_id)

            if response_data is None:
                print("Error: No response from API")
                break

            status = response_data.get("status")
            if status == "waiting" or status == "active":
                print("Still waiting... Checking again in 10 seconds.")
                time.sleep(10)
            else:
                print("Processing complete:\n", response_data["result"]['results']["channels"][0]["alternatives"][0]["transcript"])
                return response_data

        print("Timeout reached. Stopping.")
        return None


if __name__ == "__main__":
    main()
```
{% endcode %}
**Response**:

{% code overflow="wrap" %}
```
{'generation_id': 'dd412e9d-044c-43ae-b97b-e920755074d5'}
Processing complete:
 He doesn't belong to you and i don't see how you have anything to do with what is be his power yet he's he personified from this stage to you be fire
```
{% endcode %}
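The `POST /v1/stt/create` schema above also lists a number of optional flags (for example `punctuate`, `paragraphs`, `smart_format`, `diarize`). Below is a minimal sketch of a create request that enables a few of them for the same audio URL as Example #1; whether a given flag is honored by `#g1_whisper-medium` is something to verify against the schema, so treat the payload as illustrative.

{% code overflow="wrap" %}
```python
import requests

base_url = "https://api.aimlapi.com/v1"
# Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
api_key = "<YOUR_AIMLAPI_KEY>"

# Illustrative payload: the optional flags below come from the POST /v1/stt/create
# schema on this page; adjust or drop any that your chosen model does not support.
payload = {
    "model": "#g1_whisper-medium",
    "url": "https://audio-samples.github.io/samples/mp3/blizzard_primed/sample-0.mp3",
    "punctuate": True,     # add punctuation and capitalization
    "paragraphs": True,    # split the transcript into paragraphs
    "smart_format": True,  # apply extra readability formatting
}

response = requests.post(
    f"{base_url}/stt/create",
    json=payload,
    headers={"Authorization": f"Bearer {api_key}"},
)
print(response.json())  # expected shape: {'generation_id': '...'}
```
{% endcode %}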
--- # Source: https://docs.aimlapi.com/api-references/speech-models/speech-to-text/openai/whisper-small.md # whisper-small {% hint style="info" %} This documentation is valid for the following list of our models: * `#g1_whisper-small` {% endhint %} {% hint style="success" %} Note: Previously, our STT models operated via a single API call to `POST https://api.aimlapi.com/v1/stt`. You can view the API schema [here](https://docs.aimlapi.com/api-references/speech-models/speech-to-text/stt-legacy). Now, we are switching to a new two-step process: * `POST https://api.aimlapi.com/v1/stt/create` – Creates and submits a speech-to-text processing task to the server. This method accepts the same parameters as the old version but returns a `generation_id` instead of the final transcript. * `GET https://api.aimlapi.com/v1/stt/{generation_id}` – Retrieves the generated transcript from the server using the `generation_id` obtained from the previous API call. This approach helps prevent generation failures due to timeouts.\ We've prepared [a couple of examples](#quick-code-examples) below to make the transition to the new STT API easier for you. {% endhint %} ## Model Overview The Whisper models are primarily for AI research, focusing on model robustness, generalization, and biases, and are also effective for English speech recognition. The use of Whisper models for transcribing non-consensual recordings or in high-risk decision-making contexts is strongly discouraged due to potential inaccuracies and ethical concerns. The models are trained using 680,000 hours of audio and corresponding transcripts from the internet, with 65% being English audio and transcripts, 18% non-English audio with English transcripts, and 17% non-English audio with matching non-English transcripts, covering 98 languages in total. {% hint style="success" %} OpenAI STT models are priced based on tokens, similar to chat models. In practice, this means the cost primarily depends on the duration of the input audio. {% endhint %} ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schemas #### Creating and sending a speech-to-text conversion task to the server ## POST /v1/stt/create > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.SpeechToTextCreateResponseDTO":{"type":"object","properties":{"generation_id":{"type":"string","format":"uuid"}},"required":["generation_id"]}}},"paths":{"/v1/stt/create":{"post":{"operationId":"VoiceModelsController_createSpeechToText_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["#g1_whisper-small"]},"custom_intent":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}}],"description":"A custom intent you want the model to detect within your input audio if present. Submit up to 100."},"custom_topic":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}}],"description":"A custom topic you want the model to detect within your input audio if present. 
Submit up to 100."},"custom_intent_mode":{"type":"string","enum":["strict","extended"],"description":"Sets how the model will interpret strings submitted to the custom_intent param. When strict, the model will only return intents submitted using the custom_intent param. When extended, the model will return its own detected intents in addition those submitted using the custom_intents param."},"custom_topic_mode":{"type":"string","enum":["strict","extended"],"description":"Sets how the model will interpret strings submitted to the custom_topic param. When strict, the model will only return topics submitted using the custom_topic param. When extended, the model will return its own detected topics in addition to those submitted using the custom_topic param."},"detect_language":{"type":"boolean","description":"Enables language detection to identify the dominant language spoken in the submitted audio."},"detect_entities":{"type":"boolean","description":"When Entity Detection is enabled, the Punctuation feature will be enabled by default."},"detect_topics":{"type":"boolean","description":"Detects the most important and relevant topics that are referenced in speech within the audio."},"diarize":{"type":"boolean","description":"Recognizes speaker changes. Each word in the transcript will be assigned a speaker number starting at 0."},"dictation":{"type":"boolean","description":"Identifies and extracts key entities from content in submitted audio."},"diarize_version":{"type":"string","description":""},"extra":{"type":"string","description":"Arbitrary key-value pairs that are attached to the API response for usage in downstream processing."},"filler_words":{"type":"boolean","description":"Filler Words can help transcribe interruptions in your audio, like “uh” and “um”."},"intents":{"type":"boolean","description":"Recognizes speaker intent throughout a transcript or text."},"keywords":{"type":"string","description":"Keywords can boost or suppress specialized terminology and brands."},"language":{"type":"string","description":"The BCP-47 language tag that hints at the primary spoken language. Depending on the Model and API endpoint you choose only certain languages are available"},"measurements":{"type":"boolean","description":"Spoken measurements will be converted to their corresponding abbreviations"},"multi_channel":{"type":"boolean","description":"Transcribes each audio channel independently"},"numerals":{"type":"boolean","description":"Numerals converts numbers from written format to numerical format"},"paragraphs":{"type":"boolean","description":"Splits audio into paragraphs to improve transcript readability"},"profanity_filter":{"type":"boolean","description":"Profanity Filter looks for recognized profanity and converts it to the nearest recognized non-profane word or removes it from the transcript completely"},"punctuate":{"type":"boolean","description":"Adds punctuation and capitalization to the transcript"},"search":{"type":"string","description":"Search for terms or phrases in submitted audio"},"sentiment":{"type":"boolean","description":"Recognizes the sentiment throughout a transcript or text"},"smart_format":{"type":"boolean","description":"Applies formatting to transcript output. When set to true, additional formatting will be applied to transcripts to improve readability"},"summarize":{"type":"string","description":"Summarizes content. For Listen API, supports string version option. 
For Read API, accepts boolean only."},"tag":{"type":"array","items":{"type":"string"},"description":"Labels your requests for the purpose of identification during usage reporting"},"topics":{"type":"boolean","description":"Detects topics throughout a transcript or text"},"utterances":{"type":"boolean","description":"Segments speech into meaningful semantic units"},"utt_split":{"type":"number","description":"Seconds to wait before detecting a pause between words in submitted audio"},"url":{"type":"string","format":"uri"}},"required":["model","url"]}}}},"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.SpeechToTextCreateResponseDTO"}}}}},"tags":["Voice Models"]}}}} ``` #### Requesting the result of the task from the server using the generation\_id ## GET /v1/stt/{generation\_id} > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.SpeechToTextGetResponseDTO":{"type":"object","properties":{"generation_id":{"type":"string"},"status":{"type":"string","enum":["queued","completed","error","generating"]},"result":{"anyOf":[{"type":"object","properties":{"metadata":{"type":"object","properties":{"transaction_key":{"type":"string","description":"A unique transaction key; currently always “deprecated”."},"request_id":{"type":"string","description":"A UUID identifying this specific transcription request."},"sha256":{"type":"string","description":"The SHA-256 hash of the submitted audio file (for pre-recorded requests)."},"created":{"type":"string","format":"date-time","description":"ISO-8601 timestamp."},"duration":{"type":"number","description":"Length of the audio in seconds."},"channels":{"type":"number","description":"The top-level results object containing per-channel transcription alternatives."},"models":{"type":"array","items":{"type":"string"},"description":"List of model UUIDs used for this transcription"},"model_info":{"type":"object","additionalProperties":{"type":"object","properties":{"name":{"type":"string","description":"The human-readable name of the model — identifies which model was used."},"version":{"type":"string","description":"The specific version of the model."},"arch":{"type":"string","description":"The architecture of the model — describes the model family / generation."}},"required":["name","version","arch"]},"description":"Mapping from each model UUID (in 'models') to detailed info: its name, version, and architecture."}},"required":["transaction_key","request_id","sha256","created","duration","channels","models","model_info"],"description":"Metadata about the transcription response, including timing, models, and IDs."},"results":{"type":"object","nullable":true,"properties":{"channels":{"type":"object","properties":{"alternatives":{"type":"array","items":{"type":"object","properties":{"transcript":{"type":"string","description":"The full transcript text for this alternative."},"confidence":{"type":"number","description":"Overall confidence score (0-1) that assigns to this transcript alternative."},"words":{"type":"array","items":{"type":"object","properties":{"word":{"type":"string","description":"The raw recognized word, without punctuation or capitalization."},"start":{"type":"number","description":"Start timestamp of the word (in seconds, from beginning of 
audio)."},"end":{"type":"number","description":"End timestamp of the word (in seconds)."},"confidence":{"type":"number","description":"Confidence score (0-1) for this individual word."},"punctuated_word":{"type":"string","description":"The same word but with punctuation/capitalization applied (if smart_format is enabled)."}},"required":["word","start","end","confidence","punctuated_word"]},"description":"List of word-level timing, confidence, and punctuation details."},"paragraphs":{"type":"array","items":{"type":"object","properties":{"transcript":{"type":"string","description":"The transcript split into paragraphs (with line breaks), when paragraphing is enabled."},"paragraphs":{"type":"object","properties":{"sentences":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"Text of a single sentence in the paragraph."},"start":{"type":"number","description":"Start time of the sentence (in seconds)."},"end":{"type":"number","description":"End time of the sentence (in seconds)."}},"required":["text","start","end"]},"description":"List of sentences in this paragraph, with start/end times."},"num_words":{"type":"number","description":"Number of words in this paragraph."},"start":{"type":"number","description":"Start time of the paragraph (in seconds)."},"end":{"type":"number","description":"End time of the paragraph (in seconds)."}},"required":["sentences","num_words","start","end"],"description":"Structure describing each paragraph: its timespan, word count, and sentence breakdown."}},"required":["transcript","paragraphs"]},"description":"An array of paragraph objects, present when the paragraphs feature is enabled."}},"required":["transcript","confidence","words","paragraphs"]},"description":"List of possible transcription hypotheses (“alternatives”) for each channel."}},"required":["alternatives"],"description":"The top-level results object containing per-channel transcription 
alternatives."}},"required":["channels"]}},"required":["metadata"]},{"type":"object","properties":{"id":{"type":"string","format":"uuid"},"language_model":{"type":"string"},"acoustic_model":{"type":"string"},"language_code":{"type":"string"},"status":{"type":"string","enum":["queued","processing","completed","error"]},"language_detection":{"type":"boolean"},"language_confidence_threshold":{"type":"number"},"language_confidence":{"type":"number"},"speech_model":{"type":"string","enum":["best","slam-1","universal"]},"text":{"type":"string"},"words":{"type":"array","items":{"type":"object","properties":{"confidence":{"type":"number"},"end":{"type":"number"},"speaker":{"type":"string"},"start":{"type":"number"},"text":{"type":"string"}},"required":["confidence","end","start","text"]}},"utterances":{"type":"array","items":{"type":"object","properties":{"confidence":{"type":"number"},"end":{"type":"number"},"speaker":{"type":"string"},"start":{"type":"number"},"text":{"type":"string"},"words":{"type":"array","items":{"type":"object","properties":{"confidence":{"type":"number"},"end":{"type":"number"},"speaker":{"type":"string"},"start":{"type":"number"},"text":{"type":"string"}},"required":["confidence","end","start","text"]}}},"required":["confidence","end","speaker","start","text","words"]}},"confidence":{"type":"number"},"audio_duration":{"type":"number"},"punctuate":{"type":"boolean"},"format_text":{"type":"boolean"},"disfluencies":{"type":"boolean"},"multichannel":{"type":"boolean"},"webhook_url":{"type":"string"},"webhook_status_code":{"type":"number"},"webhook_auth_header_name":{"type":"string"},"speed_boost":{"type":"boolean"},"auto_highlights_result":{"type":"object","properties":{"status":{"type":"string"},"results":{"type":"array","items":{"type":"object","properties":{"count":{"type":"number"},"rank":{"type":"number"},"text":{"type":"string"},"timestamps":{"type":"array","items":{"type":"object","properties":{"start":{"type":"number"},"end":{"type":"number"}},"required":["start","end"]}}},"required":["count","rank","text","timestamps"]}}},"required":["status","results"]},"auto_highlights":{"type":"boolean"},"audio_start_from":{"type":"number"},"audio_end_at":{"type":"number"},"word_boost":{"type":"array","items":{"type":"string"}},"boost_param":{"type":"string"},"filter_profanity":{"type":"boolean"},"redact_pii":{"type":"boolean"},"redact_pii_audio":{"type":"boolean"},"redact_pii_audio_quality":{"type":"string","enum":["mp3","wav"]},"redact_pii_policies":{"type":"array","items":{"type":"string"}},"redact_pii_sub":{"type":"string","enum":["entity_name","hash"]},"speaker_labels":{"type":"boolean"},"speakers_expected":{"type":"number"},"content_safety":{"type":"boolean"},"iab_categories":{"type":"boolean"},"content_safety_labels":{"type":"object","properties":{"status":{"type":"string"},"results":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string"},"labels":{"type":"array","items":{"type":"object","properties":{"label":{"type":"string"},"confidence":{"type":"number"},"severity":{"type":"number"}},"required":["label","confidence","severity"]}},"sentences_idx_start":{"type":"number"},"sentences_idx_end":{"type":"number"},"timestamp":{"type":"object","properties":{"start":{"type":"number"},"end":{"type":"number"}},"required":["start","end"]}},"required":["text","labels","sentences_idx_start","sentences_idx_end","timestamp"]}},"summary":{"type":"object","additionalProperties":{"type":"number"}}},"required":["status","results","summary"]},"iab_categories_result":{"t
ype":"object","properties":{"status":{"type":"string"},"results":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string"},"labels":{"type":"array","items":{"type":"object","properties":{"relevance":{"type":"number"},"label":{"type":"string"}},"required":["relevance","label"]}},"timestamp":{"type":"object","properties":{"start":{"type":"number"},"end":{"type":"number"}},"required":["start","end"]}},"required":["text","labels","timestamp"]}},"summary":{"type":"object","additionalProperties":{"type":"number"}}},"required":["status","results","summary"]},"custom_spelling":{"type":"array","items":{"type":"object","properties":{"from":{"type":"string"},"to":{"type":"string"}},"required":["from","to"]}},"chapters":{"type":"array","items":{"type":"object","properties":{"summary":{"type":"string"},"headline":{"type":"string"},"gist":{"type":"string"},"start":{"type":"number"},"end":{"type":"number"}},"required":["summary","headline","gist","start","end"]}},"summarization":{"type":"boolean"},"summary_type":{"type":"string"},"summary_model":{"type":"string"},"summary":{"type":"string"},"auto_chapters":{"type":"boolean"},"sentiment_analysis":{"type":"boolean"},"sentiment_analysis_results":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string"},"start":{"type":"number"},"end":{"type":"number"},"sentiment":{"type":"string","enum":["POSITIVE","NEUTRAL","NEGATIVE"]},"confidence":{"type":"number"},"speaker":{"type":"string"}},"required":["text","start","end","sentiment","confidence"]}},"entity_detection":{"type":"boolean"},"entities":{"type":"array","items":{"type":"object","properties":{"entity_type":{"type":"string"},"text":{"type":"string"},"start":{"type":"number"},"end":{"type":"number"}},"required":["entity_type","text","start","end"]}},"speech_threshold":{"type":"number"},"throttled":{"type":"boolean"},"error":{"type":"string"}},"required":["id","status"],"additionalProperties":false},{"type":"object","properties":{"text":{"type":"string"},"usage":{"type":"object","properties":{"type":{"type":"string","enum":["tokens"]},"input_tokens":{"type":"number"},"input_token_details":{"type":"object","properties":{"text_tokens":{"type":"number"},"audio_tokens":{"type":"number"}},"required":["text_tokens","audio_tokens"]},"output_tokens":{"type":"number"},"total_tokens":{"type":"number"}},"required":["input_tokens","output_tokens","total_tokens"]}},"required":["text"],"additionalProperties":false},{"nullable":true}]},"error":{"nullable":true}},"required":["generation_id","status"]}}},"paths":{"/v1/stt/{generation_id}":{"get":{"operationId":"VoiceModelsController_getSTT_v1","parameters":[{"name":"generation_id","required":true,"in":"path","schema":{"type":"string"}}],"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.SpeechToTextGetResponseDTO"}}}}},"tags":["Voice Models"]}}}} ``` ## Quick Code Examples Let's use the `#g1_whisper-small` model to transcribe the following audio fragment: {% embed url="" %} ### Example #1: Processing a Speech Audio File via URL
{% code overflow="wrap" %}
```python
import time
import requests

base_url = "https://api.aimlapi.com/v1"
# Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
api_key = "<YOUR_AIMLAPI_KEY>"

# Creating and sending a speech-to-text conversion task to the server
def create_stt():
    url = f"{base_url}/stt/create"
    headers = {
        "Authorization": f"Bearer {api_key}", 
    }

    data = {
        "model": "#g1_whisper-small",
        "url": "https://audio-samples.github.io/samples/mp3/blizzard_primed/sample-0.mp3"
    }
 
    response = requests.post(url, json=data, headers=headers)
    
    if response.status_code >= 400:
        print(f"Error: {response.status_code} - {response.text}")
    else:
        response_data = response.json()
        print(response_data)
        return response_data

# Requesting the result of the task from the server using the generation_id
def get_stt(gen_id):
    url = f"{base_url}/stt/{gen_id}"
    headers = {
        "Authorization": f"Bearer {api_key}", 
    }
    response = requests.get(url, headers=headers)
    return response.json()
    
# First, start the generation, then repeatedly request the result from the server every 10 seconds.
def main():
    stt_response = create_stt()
    gen_id = stt_response.get("generation_id")


    if gen_id:
        start_time = time.time()

        timeout = 600
        while time.time() - start_time < timeout:
            response_data = get_stt(gen_id)

            if response_data is None:
                print("Error: No response from API")
                break
        
            status = response_data.get("status")

            if status == "waiting" or status == "active":
                print("Still waiting... Checking again in 10 seconds.")
                time.sleep(10)
            else:
                print("Processing complete:\n", response_data["result"]['results']["channels"][0]["alternatives"][0]["transcript"])
                return response_data
   
        print("Timeout reached. Stopping.")
        return None     


if __name__ == "__main__":
    main()

```
{% endcode %}

Response {% code overflow="wrap" %} ``` {'generation_id': '88a282a7-6f90-4532-a2a2-882c9d20d08e'} Processing complete: He doesn't belong to you, and I don't see how you have anything to do with what is be his power yet. He's he personed that from this stage to you. Be fire- ``` {% endcode %}
### Example #2: Processing a Speech Audio File via File Path

{% code overflow="wrap" %}
```python
import time
import requests

base_url = "https://api.aimlapi.com/v1"
# Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
api_key = "<YOUR_AIMLAPI_KEY>"

# Creating and sending a speech-to-text conversion task to the server
def create_stt():
    url = f"{base_url}/stt/create"
    headers = {
        "Authorization": f"Bearer {api_key}",
    }

    data = {
        "model": "#g1_whisper-small",
    }

    with open("stt-sample.mp3", "rb") as file:
        files = {"audio": ("sample.mp3", file, "audio/mpeg")}
        response = requests.post(url, data=data, headers=headers, files=files)

    if response.status_code >= 400:
        print(f"Error: {response.status_code} - {response.text}")
    else:
        response_data = response.json()
        print(response_data)
        return response_data

# Requesting the result of the task from the server using the generation_id
def get_stt(gen_id):
    url = f"{base_url}/stt/{gen_id}"
    headers = {
        "Authorization": f"Bearer {api_key}",
    }
    response = requests.get(url, headers=headers)
    return response.json()

# First, start the generation, then repeatedly request the result from the server every 10 seconds.
def main():
    stt_response = create_stt()
    # create_stt() returns None on an error, so guard against that before polling
    gen_id = stt_response.get("generation_id") if stt_response else None

    if gen_id:
        start_time = time.time()

        timeout = 600
        while time.time() - start_time < timeout:
            response_data = get_stt(gen_id)

            if response_data is None:
                print("Error: No response from API")
                break

            status = response_data.get("status")

            if status == "waiting" or status == "active":
                print("Still waiting... Checking again in 10 seconds.")
                time.sleep(10)
            else:
                print("Processing complete:\n", response_data["result"]['results']["channels"][0]["alternatives"][0]["transcript"])
                return response_data

        print("Timeout reached. Stopping.")
        return None


if __name__ == "__main__":
    main()
```
{% endcode %}
Response {% code overflow="wrap" %} ``` {'generation_id': '88a282a7-6f90-4532-a2a2-882c9d20d08e'} Processing complete: He doesn't belong to you, and I don't see how you have anything to do with what is be his power yet. He's he personed that from this stage to you. Be fire- ``` {% endcode %}
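Both examples above print only the plain transcript. The `GET /v1/stt/{generation_id}` response schema also documents word-level details for each transcription alternative: start and end timestamps in seconds, a per-word confidence score, and a punctuated variant of each word. Below is a minimal sketch of how those fields could be read from an already completed `response_data` dict; which fields are actually populated may depend on the model and request options, so treat it as illustrative rather than definitive.

{% code overflow="wrap" %}
```python
# A sketch for reading word-level details from a completed STT result.
# Assumes `response_data` is the dict returned by get_stt() after processing
# has finished; field names follow the GET /v1/stt/{generation_id} schema above
# and may be absent depending on the model and request options.
def print_word_timings(response_data):
    alternative = response_data["result"]["results"]["channels"][0]["alternatives"][0]
    print("Transcript:", alternative["transcript"])
    for word in alternative.get("words", []):
        # Each entry carries the raw word, its time span in seconds,
        # a 0-1 confidence score, and (if available) a punctuated variant.
        print(
            f'{word["start"]:7.2f}s - {word["end"]:7.2f}s  '
            f'{word.get("punctuated_word", word["word"])} '
            f'(confidence {word["confidence"]:.2f})'
        )
```
{% endcode %}

You could call `print_word_timings(response_data)` in place of the final `print` in the polling loop above.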
--- # Source: https://docs.aimlapi.com/api-references/speech-models/speech-to-text/openai/whisper-tiny.md # whisper-tiny {% hint style="info" %} This documentation is valid for the following list of our models: * `#g1_whisper-tiny` {% endhint %} {% hint style="success" %} Note: Previously, our STT models operated via a single API call to `POST https://api.aimlapi.com/v1/stt`. You can view the API schema [here](https://docs.aimlapi.com/api-references/speech-models/speech-to-text/stt-legacy). Now, we are switching to a new two-step process: * `POST https://api.aimlapi.com/v1/stt/create` – Creates and submits a speech-to-text processing task to the server. This method accepts the same parameters as the old version but returns a `generation_id` instead of the final transcript. * `GET https://api.aimlapi.com/v1/stt/{generation_id}` – Retrieves the generated transcript from the server using the `generation_id` obtained from the previous API call. This approach helps prevent generation failures due to timeouts.\ We've prepared [a couple of examples](#quick-code-examples) below to make the transition to the new STT API easier for you. {% endhint %} ## Model Overview The Whisper models are primarily for AI research, focusing on model robustness, generalization, and biases, and are also effective for English speech recognition. The use of Whisper models for transcribing non-consensual recordings or in high-risk decision-making contexts is strongly discouraged due to potential inaccuracies and ethical concerns. The models are trained using 680,000 hours of audio and corresponding transcripts from the internet, with 65% being English audio and transcripts, 18% non-English audio with English transcripts, and 17% non-English audio with matching non-English transcripts, covering 98 languages in total. {% hint style="success" %} OpenAI STT models are priced based on tokens, similar to chat models. In practice, this means the cost primarily depends on the duration of the input audio. {% endhint %} ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schemas #### Creating and sending a speech-to-text conversion task to the server ## POST /v1/stt/create > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.SpeechToTextCreateResponseDTO":{"type":"object","properties":{"generation_id":{"type":"string","format":"uuid"}},"required":["generation_id"]}}},"paths":{"/v1/stt/create":{"post":{"operationId":"VoiceModelsController_createSpeechToText_v1","parameters":[],"requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"enum":["#g1_whisper-tiny"]},"custom_intent":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}}],"description":"A custom intent you want the model to detect within your input audio if present. Submit up to 100."},"custom_topic":{"anyOf":[{"type":"string"},{"type":"array","items":{"type":"string"}}],"description":"A custom topic you want the model to detect within your input audio if present. 
Submit up to 100."},"custom_intent_mode":{"type":"string","enum":["strict","extended"],"description":"Sets how the model will interpret strings submitted to the custom_intent param. When strict, the model will only return intents submitted using the custom_intent param. When extended, the model will return its own detected intents in addition those submitted using the custom_intents param."},"custom_topic_mode":{"type":"string","enum":["strict","extended"],"description":"Sets how the model will interpret strings submitted to the custom_topic param. When strict, the model will only return topics submitted using the custom_topic param. When extended, the model will return its own detected topics in addition to those submitted using the custom_topic param."},"detect_language":{"type":"boolean","description":"Enables language detection to identify the dominant language spoken in the submitted audio."},"detect_entities":{"type":"boolean","description":"When Entity Detection is enabled, the Punctuation feature will be enabled by default."},"detect_topics":{"type":"boolean","description":"Detects the most important and relevant topics that are referenced in speech within the audio."},"diarize":{"type":"boolean","description":"Recognizes speaker changes. Each word in the transcript will be assigned a speaker number starting at 0."},"dictation":{"type":"boolean","description":"Identifies and extracts key entities from content in submitted audio."},"diarize_version":{"type":"string","description":""},"extra":{"type":"string","description":"Arbitrary key-value pairs that are attached to the API response for usage in downstream processing."},"filler_words":{"type":"boolean","description":"Filler Words can help transcribe interruptions in your audio, like “uh” and “um”."},"intents":{"type":"boolean","description":"Recognizes speaker intent throughout a transcript or text."},"keywords":{"type":"string","description":"Keywords can boost or suppress specialized terminology and brands."},"language":{"type":"string","description":"The BCP-47 language tag that hints at the primary spoken language. Depending on the Model and API endpoint you choose only certain languages are available"},"measurements":{"type":"boolean","description":"Spoken measurements will be converted to their corresponding abbreviations"},"multi_channel":{"type":"boolean","description":"Transcribes each audio channel independently"},"numerals":{"type":"boolean","description":"Numerals converts numbers from written format to numerical format"},"paragraphs":{"type":"boolean","description":"Splits audio into paragraphs to improve transcript readability"},"profanity_filter":{"type":"boolean","description":"Profanity Filter looks for recognized profanity and converts it to the nearest recognized non-profane word or removes it from the transcript completely"},"punctuate":{"type":"boolean","description":"Adds punctuation and capitalization to the transcript"},"search":{"type":"string","description":"Search for terms or phrases in submitted audio"},"sentiment":{"type":"boolean","description":"Recognizes the sentiment throughout a transcript or text"},"smart_format":{"type":"boolean","description":"Applies formatting to transcript output. When set to true, additional formatting will be applied to transcripts to improve readability"},"summarize":{"type":"string","description":"Summarizes content. For Listen API, supports string version option. 
For Read API, accepts boolean only."},"tag":{"type":"array","items":{"type":"string"},"description":"Labels your requests for the purpose of identification during usage reporting"},"topics":{"type":"boolean","description":"Detects topics throughout a transcript or text"},"utterances":{"type":"boolean","description":"Segments speech into meaningful semantic units"},"utt_split":{"type":"number","description":"Seconds to wait before detecting a pause between words in submitted audio"},"url":{"type":"string","format":"uri"}},"required":["model","url"]}}}},"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.SpeechToTextCreateResponseDTO"}}}}},"tags":["Voice Models"]}}}} ``` #### Requesting the result of the task from the server using the generation\_id ## GET /v1/stt/{generation\_id} > ```json {"openapi":"3.0.0","info":{"title":"AI/ML Gateway","version":"1.0"},"servers":[{"url":"https://api.aimlapi.com"}],"security":[{"access-token":[]}],"components":{"securitySchemes":{"access-token":{"scheme":"bearer","bearerFormat":"","type":"http","description":"Bearer key"}},"schemas":{"Voice.v1.SpeechToTextGetResponseDTO":{"type":"object","properties":{"generation_id":{"type":"string"},"status":{"type":"string","enum":["queued","completed","error","generating"]},"result":{"anyOf":[{"type":"object","properties":{"metadata":{"type":"object","properties":{"transaction_key":{"type":"string","description":"A unique transaction key; currently always “deprecated”."},"request_id":{"type":"string","description":"A UUID identifying this specific transcription request."},"sha256":{"type":"string","description":"The SHA-256 hash of the submitted audio file (for pre-recorded requests)."},"created":{"type":"string","format":"date-time","description":"ISO-8601 timestamp."},"duration":{"type":"number","description":"Length of the audio in seconds."},"channels":{"type":"number","description":"The top-level results object containing per-channel transcription alternatives."},"models":{"type":"array","items":{"type":"string"},"description":"List of model UUIDs used for this transcription"},"model_info":{"type":"object","additionalProperties":{"type":"object","properties":{"name":{"type":"string","description":"The human-readable name of the model — identifies which model was used."},"version":{"type":"string","description":"The specific version of the model."},"arch":{"type":"string","description":"The architecture of the model — describes the model family / generation."}},"required":["name","version","arch"]},"description":"Mapping from each model UUID (in 'models') to detailed info: its name, version, and architecture."}},"required":["transaction_key","request_id","sha256","created","duration","channels","models","model_info"],"description":"Metadata about the transcription response, including timing, models, and IDs."},"results":{"type":"object","nullable":true,"properties":{"channels":{"type":"object","properties":{"alternatives":{"type":"array","items":{"type":"object","properties":{"transcript":{"type":"string","description":"The full transcript text for this alternative."},"confidence":{"type":"number","description":"Overall confidence score (0-1) that assigns to this transcript alternative."},"words":{"type":"array","items":{"type":"object","properties":{"word":{"type":"string","description":"The raw recognized word, without punctuation or capitalization."},"start":{"type":"number","description":"Start timestamp of the word (in seconds, from beginning of 
audio)."},"end":{"type":"number","description":"End timestamp of the word (in seconds)."},"confidence":{"type":"number","description":"Confidence score (0-1) for this individual word."},"punctuated_word":{"type":"string","description":"The same word but with punctuation/capitalization applied (if smart_format is enabled)."}},"required":["word","start","end","confidence","punctuated_word"]},"description":"List of word-level timing, confidence, and punctuation details."},"paragraphs":{"type":"array","items":{"type":"object","properties":{"transcript":{"type":"string","description":"The transcript split into paragraphs (with line breaks), when paragraphing is enabled."},"paragraphs":{"type":"object","properties":{"sentences":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string","description":"Text of a single sentence in the paragraph."},"start":{"type":"number","description":"Start time of the sentence (in seconds)."},"end":{"type":"number","description":"End time of the sentence (in seconds)."}},"required":["text","start","end"]},"description":"List of sentences in this paragraph, with start/end times."},"num_words":{"type":"number","description":"Number of words in this paragraph."},"start":{"type":"number","description":"Start time of the paragraph (in seconds)."},"end":{"type":"number","description":"End time of the paragraph (in seconds)."}},"required":["sentences","num_words","start","end"],"description":"Structure describing each paragraph: its timespan, word count, and sentence breakdown."}},"required":["transcript","paragraphs"]},"description":"An array of paragraph objects, present when the paragraphs feature is enabled."}},"required":["transcript","confidence","words","paragraphs"]},"description":"List of possible transcription hypotheses (“alternatives”) for each channel."}},"required":["alternatives"],"description":"The top-level results object containing per-channel transcription 
alternatives."}},"required":["channels"]}},"required":["metadata"]},{"type":"object","properties":{"id":{"type":"string","format":"uuid"},"language_model":{"type":"string"},"acoustic_model":{"type":"string"},"language_code":{"type":"string"},"status":{"type":"string","enum":["queued","processing","completed","error"]},"language_detection":{"type":"boolean"},"language_confidence_threshold":{"type":"number"},"language_confidence":{"type":"number"},"speech_model":{"type":"string","enum":["best","slam-1","universal"]},"text":{"type":"string"},"words":{"type":"array","items":{"type":"object","properties":{"confidence":{"type":"number"},"end":{"type":"number"},"speaker":{"type":"string"},"start":{"type":"number"},"text":{"type":"string"}},"required":["confidence","end","start","text"]}},"utterances":{"type":"array","items":{"type":"object","properties":{"confidence":{"type":"number"},"end":{"type":"number"},"speaker":{"type":"string"},"start":{"type":"number"},"text":{"type":"string"},"words":{"type":"array","items":{"type":"object","properties":{"confidence":{"type":"number"},"end":{"type":"number"},"speaker":{"type":"string"},"start":{"type":"number"},"text":{"type":"string"}},"required":["confidence","end","start","text"]}}},"required":["confidence","end","speaker","start","text","words"]}},"confidence":{"type":"number"},"audio_duration":{"type":"number"},"punctuate":{"type":"boolean"},"format_text":{"type":"boolean"},"disfluencies":{"type":"boolean"},"multichannel":{"type":"boolean"},"webhook_url":{"type":"string"},"webhook_status_code":{"type":"number"},"webhook_auth_header_name":{"type":"string"},"speed_boost":{"type":"boolean"},"auto_highlights_result":{"type":"object","properties":{"status":{"type":"string"},"results":{"type":"array","items":{"type":"object","properties":{"count":{"type":"number"},"rank":{"type":"number"},"text":{"type":"string"},"timestamps":{"type":"array","items":{"type":"object","properties":{"start":{"type":"number"},"end":{"type":"number"}},"required":["start","end"]}}},"required":["count","rank","text","timestamps"]}}},"required":["status","results"]},"auto_highlights":{"type":"boolean"},"audio_start_from":{"type":"number"},"audio_end_at":{"type":"number"},"word_boost":{"type":"array","items":{"type":"string"}},"boost_param":{"type":"string"},"filter_profanity":{"type":"boolean"},"redact_pii":{"type":"boolean"},"redact_pii_audio":{"type":"boolean"},"redact_pii_audio_quality":{"type":"string","enum":["mp3","wav"]},"redact_pii_policies":{"type":"array","items":{"type":"string"}},"redact_pii_sub":{"type":"string","enum":["entity_name","hash"]},"speaker_labels":{"type":"boolean"},"speakers_expected":{"type":"number"},"content_safety":{"type":"boolean"},"iab_categories":{"type":"boolean"},"content_safety_labels":{"type":"object","properties":{"status":{"type":"string"},"results":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string"},"labels":{"type":"array","items":{"type":"object","properties":{"label":{"type":"string"},"confidence":{"type":"number"},"severity":{"type":"number"}},"required":["label","confidence","severity"]}},"sentences_idx_start":{"type":"number"},"sentences_idx_end":{"type":"number"},"timestamp":{"type":"object","properties":{"start":{"type":"number"},"end":{"type":"number"}},"required":["start","end"]}},"required":["text","labels","sentences_idx_start","sentences_idx_end","timestamp"]}},"summary":{"type":"object","additionalProperties":{"type":"number"}}},"required":["status","results","summary"]},"iab_categories_result":{"t
ype":"object","properties":{"status":{"type":"string"},"results":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string"},"labels":{"type":"array","items":{"type":"object","properties":{"relevance":{"type":"number"},"label":{"type":"string"}},"required":["relevance","label"]}},"timestamp":{"type":"object","properties":{"start":{"type":"number"},"end":{"type":"number"}},"required":["start","end"]}},"required":["text","labels","timestamp"]}},"summary":{"type":"object","additionalProperties":{"type":"number"}}},"required":["status","results","summary"]},"custom_spelling":{"type":"array","items":{"type":"object","properties":{"from":{"type":"string"},"to":{"type":"string"}},"required":["from","to"]}},"chapters":{"type":"array","items":{"type":"object","properties":{"summary":{"type":"string"},"headline":{"type":"string"},"gist":{"type":"string"},"start":{"type":"number"},"end":{"type":"number"}},"required":["summary","headline","gist","start","end"]}},"summarization":{"type":"boolean"},"summary_type":{"type":"string"},"summary_model":{"type":"string"},"summary":{"type":"string"},"auto_chapters":{"type":"boolean"},"sentiment_analysis":{"type":"boolean"},"sentiment_analysis_results":{"type":"array","items":{"type":"object","properties":{"text":{"type":"string"},"start":{"type":"number"},"end":{"type":"number"},"sentiment":{"type":"string","enum":["POSITIVE","NEUTRAL","NEGATIVE"]},"confidence":{"type":"number"},"speaker":{"type":"string"}},"required":["text","start","end","sentiment","confidence"]}},"entity_detection":{"type":"boolean"},"entities":{"type":"array","items":{"type":"object","properties":{"entity_type":{"type":"string"},"text":{"type":"string"},"start":{"type":"number"},"end":{"type":"number"}},"required":["entity_type","text","start","end"]}},"speech_threshold":{"type":"number"},"throttled":{"type":"boolean"},"error":{"type":"string"}},"required":["id","status"],"additionalProperties":false},{"type":"object","properties":{"text":{"type":"string"},"usage":{"type":"object","properties":{"type":{"type":"string","enum":["tokens"]},"input_tokens":{"type":"number"},"input_token_details":{"type":"object","properties":{"text_tokens":{"type":"number"},"audio_tokens":{"type":"number"}},"required":["text_tokens","audio_tokens"]},"output_tokens":{"type":"number"},"total_tokens":{"type":"number"}},"required":["input_tokens","output_tokens","total_tokens"]}},"required":["text"],"additionalProperties":false},{"nullable":true}]},"error":{"nullable":true}},"required":["generation_id","status"]}}},"paths":{"/v1/stt/{generation_id}":{"get":{"operationId":"VoiceModelsController_getSTT_v1","parameters":[{"name":"generation_id","required":true,"in":"path","schema":{"type":"string"}}],"responses":{"201":{"description":"","content":{"application/json":{"schema":{"$ref":"#/components/schemas/Voice.v1.SpeechToTextGetResponseDTO"}}}}},"tags":["Voice Models"]}}}} ``` ## Quick Code Examples Let's use the `#g1_whisper-tiny` model to transcribe the following audio fragment: {% embed url="" %} ### Example #1: Processing a Speech Audio File via URL
{% code overflow="wrap" %}
```python
import time
import requests

base_url = "https://api.aimlapi.com/v1"
# Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
api_key = "<YOUR_AIMLAPI_KEY>"

# Creating and sending a speech-to-text conversion task to the server
def create_stt():
    url = f"{base_url}/stt/create"
    headers = {
        "Authorization": f"Bearer {api_key}", 
    }

    data = {
        "model": "#g1_whisper-tiny",
        "url": "https://audio-samples.github.io/samples/mp3/blizzard_primed/sample-0.mp3"
    }
 
    response = requests.post(url, json=data, headers=headers)
    
    if response.status_code >= 400:
        print(f"Error: {response.status_code} - {response.text}")
    else:
        response_data = response.json()
        print(response_data)
        return response_data

# Requesting the result of the task from the server using the generation_id
def get_stt(gen_id):
    url = f"{base_url}/stt/{gen_id}"
    headers = {
        "Authorization": f"Bearer {api_key}", 
    }
    response = requests.get(url, headers=headers)
    return response.json()
    
# First, start the generation, then repeatedly request the result from the server every 10 seconds.
def main():
    stt_response = create_stt()
    # create_stt() returns None on an error, so guard against that before polling
    gen_id = stt_response.get("generation_id") if stt_response else None


    if gen_id:
        start_time = time.time()

        timeout = 600
        while time.time() - start_time < timeout:
            response_data = get_stt(gen_id)

            if response_data is None:
                print("Error: No response from API")
                break
        
            status = response_data.get("status")

            if status == "waiting" or status == "active":
                print("Still waiting... Checking again in 10 seconds.")
                time.sleep(10)
            else:
                print("Processing complete:\n", response_data["result"]['results']["channels"][0]["alternatives"][0]["transcript"])
                return response_data
   
        print("Timeout reached. Stopping.")
        return None     


if __name__ == "__main__":
    main()
```
{% endcode %}

Response {% code overflow="wrap" %} ``` {'generation_id': 'f3e8729e-9a36-4650-81f1-c08fc1b16f39'} Processing complete: He doesn't belong to you and I don't see how you have anything to do with what is be his power You he's he personally that from this stage to you Be fine ``` {% endcode %}
### Example #2: Processing a Speech Audio File via File Path

{% code overflow="wrap" %}
```python
import time
import requests

base_url = "https://api.aimlapi.com/v1"
# Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
api_key = "<YOUR_AIMLAPI_KEY>"

# Creating and sending a speech-to-text conversion task to the server
def create_stt():
    url = f"{base_url}/stt/create"
    headers = {
        "Authorization": f"Bearer {api_key}",
    }

    data = {
        "model": "#g1_whisper-tiny",
    }

    with open("stt-sample.mp3", "rb") as file:
        files = {"audio": ("sample.mp3", file, "audio/mpeg")}
        response = requests.post(url, data=data, headers=headers, files=files)

    if response.status_code >= 400:
        print(f"Error: {response.status_code} - {response.text}")
    else:
        response_data = response.json()
        print(response_data)
        return response_data

# Requesting the result of the task from the server using the generation_id
def get_stt(gen_id):
    url = f"{base_url}/stt/{gen_id}"
    headers = {
        "Authorization": f"Bearer {api_key}",
    }
    response = requests.get(url, headers=headers)
    return response.json()

# First, start the generation, then repeatedly request the result from the server every 10 seconds.
def main():
    stt_response = create_stt()
    # create_stt() returns None on an error, so guard against that before polling
    gen_id = stt_response.get("generation_id") if stt_response else None

    if gen_id:
        start_time = time.time()

        timeout = 600
        while time.time() - start_time < timeout:
            response_data = get_stt(gen_id)

            if response_data is None:
                print("Error: No response from API")
                break

            status = response_data.get("status")

            if status == "waiting" or status == "active":
                print("Still waiting... Checking again in 10 seconds.")
                time.sleep(10)
            else:
                print("Processing complete:\n", response_data["result"]['results']["channels"][0]["alternatives"][0]["transcript"])
                return response_data

        print("Timeout reached. Stopping.")
        return None


if __name__ == "__main__":
    main()
```
{% endcode %}
Response {% code overflow="wrap" %} ``` {'generation_id': 'f3e8729e-9a36-4650-81f1-c08fc1b16f39'} Processing complete: He doesn't belong to you and I don't see how you have anything to do with what is be his power You he's he personally that from this stage to you Be fine ``` {% endcode %}
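The `POST /v1/stt/create` schema above also accepts a number of optional flags, such as `punctuate`, `smart_format`, `filler_words`, and a `language` hint. The sketch below simply extends the `data` payload from Example #1 with a few of them; whether a particular flag is honored for `#g1_whisper-tiny` specifically is not guaranteed, so verify against your own results.

{% code overflow="wrap" %}
```python
# A sketch of a /v1/stt/create payload with optional flags from the schema above.
# Support for individual flags may vary by model; the values here are illustrative.
data = {
    "model": "#g1_whisper-tiny",
    "url": "https://audio-samples.github.io/samples/mp3/blizzard_primed/sample-0.mp3",
    "punctuate": True,     # add punctuation and capitalization to the transcript
    "smart_format": True,  # apply extra formatting to improve readability
    "language": "en",      # BCP-47 hint for the primary spoken language
}
```
{% endcode %}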
--- # Source: https://docs.aimlapi.com/api-references/image-models/xai.md # Source: https://docs.aimlapi.com/api-references/text-models-llm/xai.md # xAI - [grok-3-beta](/api-references/text-models-llm/xai/grok-3-beta.md) - [grok-3-mini-beta](/api-references/text-models-llm/xai/grok-3-mini-beta.md) - [grok-4](/api-references/text-models-llm/xai/grok-4.md) - [grok-code-fast-1](/api-references/text-models-llm/xai/grok-code-fast-1.md) - [grok-4-fast-non-reasoning](/api-references/text-models-llm/xai/grok-4-fast-non-reasoning.md) - [grok-4-fast-reasoning](/api-references/text-models-llm/xai/grok-4-fast-reasoning.md) - [grok-4.1-fast-non-reasoning](/api-references/text-models-llm/xai/grok-4-1-fast-non-reasoning.md) - [grok-4.1-fast-reasoning](/api-references/text-models-llm/xai/grok-4-1-fast-reasoning.md) --- # Source: https://docs.aimlapi.com/api-references/image-models/alibaba-cloud/z-image-turbo-lora.md # z-image-turbo-lora {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/z-image-turbo-lora` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview An ultra-fast 6B-parameter text-to-image model with LoRA[^1] support \ (see [a separate example](#example-2-text-to-image-with-lora-fine-tuning) of how to use it). ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/z-image-turbo-lora"]},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"image_size":{"anyOf":[{"type":"object","properties":{"width":{"type":"integer","minimum":512,"maximum":2048,"default":1024},"height":{"type":"integer","minimum":512,"maximum":2048,"default":768}},"description":"For both height and width, the value must be a multiple of 32."},{"type":"string","enum":["square_hd","square","portrait_4_3","portrait_16_9","landscape_4_3","landscape_16_9"],"description":"The size of the generated image."}],"default":"landscape_4_3"},"output_format":{"type":"string","enum":["jpeg","png","webp"],"default":"png","description":"The format of the generated image."},"enable_prompt_expansion":{"type":"boolean","default":true,"description":"If set to True, prompt will be upsampled with more details."},"num_inference_steps":{"type":"integer","minimum":1,"maximum":8,"description":"The number of inference steps to perform."},"seed":{"type":"integer","minimum":1,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"num_images":{"type":"number","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."},"enable_safety_checker":{"type":"boolean","default":true,"description":"If set to True, the safety checker will be enabled."},"acceleration":{"type":"string","enum":["none","regular","high"],"default":"regular","description":"The speed of the generation. 
The higher the speed, the faster the generation."},"loras":{"type":"array","items":{"type":"object","properties":{"path":{"type":"string","description":"URL, HuggingFace repo ID (owner/repo)."},"scale":{"type":"number","minimum":0,"maximum":4,"description":"Scale factor for LoRA application."}},"required":["path"]},"maxItems":3,"description":"List of LoRA weights to apply (maximum 3). Each LoRA can be a URL, HuggingFace repo ID, or local path."}},"required":["model","prompt"],"title":"alibaba/z-image-turbo-lora"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Example #1: Standard Text-to-Image Let's generate an image of the specified size using a simple prompt. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "alibaba/z-image-turbo-lora", "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.", "image_size": { "width": 1440, "height": 512 } } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'alibaba/z-image-turbo-lora', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.', image_size: { width: 1440, height: 512 }, }), }); const data = await response.json(); console.log('Generation:', data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "data": [ { "url": "https://cdn.aimlapi.com/flamingo/files/b/0a84de54/-IaUBYEQiYqRaeT7oZvus.png" } ], "meta": { "usage": { "tokens_used": 17850 } } } ``` {% endcode %}
We obtained the following 1440x512 image by running this code example:
*** ## Example #2: Text-to-Image with LoRA Fine-Tuning The `alibaba/z-image-turbo-lora` model supports applying up to three LoRA adapters to modify the style or behavior of the base model. The `loras` parameter is an **array of objects**, not strings. Each object describes a single LoRA: * `path` — where to load the LoRA from (Hugging Face repo ID, direct URL to the weights file, or local path). * `scale` — how strongly this LoRA should influence the result (typically between `0.6` and `1.0`).
Community LoRAs on Hugging Face that are compatible with Z-Image Turbo **Style LoRAs** * `renderartist/Classic-Painting-Z-Image-Turbo-LoRA` – classic oil painting / “old masters” look, museum-like style. * `renderartist/Coloring-Book-Z-Image-Turbo-LoRA` – “coloring book” style with clear outlines and flat fills, good for children’s illustrations and icons. * `ostris/z_image_turbo_childrens_drawings` — a stylish LoRA adaptation for generating artistic children's drawings. * `renderartist/Technically-Color-Z-Image-Turbo` – vivid, cinematic Technicolor-style colors and dramatic lighting. * `suayptalha/Z-Image-Turbo-Realism-LoRA` – boosts photorealism; includes the trigger word `Realism`. * `AlekseyCalvin/HistoricColor_Z-image-Turbo-LoRA` – historical early-1900s color photography inspired by three-color processes. * `MGRI/Z-Image-Turbo-Panyue-Lora` – character LoRA for the digital persona **Panyue** (triggers: `Panyue`, `潘悦`). **Technical / utility LoRAs** * `GuangyuanSD/Z-Image-Re-Turbo-LoRA` – Re-Turbo adapter that restores Turbo-level speed while behaving like a de-turbo model for training and advanced workflows. * `ostris/zimage_turbo_training_adapter` – training adapter / de-distill LoRA used as a base for building new LoRAs on top of Z-Image Turbo.
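Since `loras` is an array, several adapters can be combined in a single request (up to three, according to the schema). The payload sketch below uses two repo IDs from the list above; whether any particular pair of LoRAs blends well is not guaranteed and usually takes some experimentation with the `scale` values.

{% code overflow="wrap" %}
```python
# A sketch of a request payload that stacks two LoRA adapters.
# The schema allows up to three entries; the repo IDs come from the community
# list above, but this particular combination is illustrative only.
payload = {
    "model": "alibaba/z-image-turbo-lora",
    "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.",
    "image_size": {"width": 1440, "height": 512},
    "loras": [
        {"path": "renderartist/Classic-Painting-Z-Image-Turbo-LoRA", "scale": 0.8},
        {"path": "renderartist/Technically-Color-Z-Image-Turbo", "scale": 0.6},
    ],
}
```
{% endcode %}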
*** Let's generate an image using a LoRA to influence the visual style. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "alibaba/z-image-turbo-lora", "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.", "image_size": { "width": 1440, "height": 512 }, "loras":[ { "path": "ostris/z_image_turbo_childrens_drawings", "scale": 0.85 } ] } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'alibaba/z-image-turbo-lora', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.', image_size: { width: 1440, height: 512 }, loras:[ { path: 'ostris/z_image_turbo_childrens_drawings', scale: 0.85 } ] }), }); const data = await response.json(); console.log('Generation:', data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "data": [ { "url": "https://cdn.aimlapi.com/flamingo/files/b/0a85ed1b/j-z8clom9AqL_2ZTsGygk.png" } ], "meta": { "usage": { "credits_used": 17850 } } } ``` {% endcode %}
We obtained the following 1440x512 image by running this code example:
The scale parameter significantly affects the output. The previous example was generated with `scale` = `0.85`, and here’s what happens when we increase it to `1.0`:
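To reproduce this kind of comparison yourself, you can keep the prompt and `seed` fixed (the schema notes that the same seed and prompt produce the same image) and vary only the LoRA `scale`. A minimal sketch, assuming the same endpoint and parameters as in Example #2; the seed value itself is arbitrary:

{% code overflow="wrap" %}
```python
import requests

# A sketch: regenerate the same scene at two LoRA scales for a side-by-side comparison.
# Fixing `seed` keeps everything except the LoRA strength constant.
# Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
api_key = "<YOUR_AIMLAPI_KEY>"

for scale in (0.85, 1.0):
    response = requests.post(
        "https://api.aimlapi.com/v1/images/generations",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        json={
            "model": "alibaba/z-image-turbo-lora",
            "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.",
            "image_size": {"width": 1440, "height": 512},
            "seed": 42,  # arbitrary fixed value so only the LoRA scale changes
            "loras": [
                {"path": "ostris/z_image_turbo_childrens_drawings", "scale": scale}
            ],
        },
    )
    print(scale, response.json()["data"][0]["url"])
```
{% endcode %}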
[^1]: The **LoRA algorithm** (Low-Rank Adaptation) is a parameter-efficient fine-tuning technique used to adapt large language models (LLMs) and stable diffusion models to new tasks or domains without retraining the entire model. This process is faster and requires significantly less memory and computational resources than full fine-tuning. --- # Source: https://docs.aimlapi.com/api-references/image-models/alibaba-cloud/z-image-turbo.md # z-image-turbo {% columns %} {% column width="66.66666666666666%" %} {% hint style="info" %} This documentation is valid for the following list of our models: * `alibaba/z-image-turbo` {% endhint %} {% endcolumn %} {% column width="33.33333333333334%" %} Try in Playground {% endcolumn %} {% endcolumns %} ## Model Overview An ultra-fast 6B-parameter text-to-image model. ## Setup your API Key If you don’t have an API key for the AI/ML API yet, feel free to use our [Quickstart guide](https://docs.aimlapi.com/quickstart/setting-up). ## API Schema ## POST /v1/images/generations > ```json {"openapi":"3.0.0","info":{"title":"AIML API","version":"1.0.0"},"servers":[{"url":"https://api.aimlapi.com"}],"paths":{"/v1/images/generations":{"post":{"operationId":"_v1_images_generations","requestBody":{"required":true,"content":{"application/json":{"schema":{"type":"object","properties":{"model":{"type":"string","enum":["alibaba/z-image-turbo"]},"prompt":{"type":"string","maxLength":4000,"description":"The text prompt describing the content, style, or composition of the image to be generated."},"image_size":{"anyOf":[{"type":"object","properties":{"width":{"type":"integer","minimum":512,"maximum":2048,"default":1024},"height":{"type":"integer","minimum":512,"maximum":2048,"default":768}},"description":"For both height and width, the value must be a multiple of 32."},{"type":"string","enum":["square_hd","square","portrait_4_3","portrait_16_9","landscape_4_3","landscape_16_9"],"description":"The size of the generated image."}],"default":"landscape_4_3"},"output_format":{"type":"string","enum":["jpeg","png","webp"],"default":"png","description":"The format of the generated image."},"enable_prompt_expansion":{"type":"boolean","default":true,"description":"If set to True, prompt will be upsampled with more details."},"num_inference_steps":{"type":"integer","minimum":1,"maximum":8,"description":"The number of inference steps to perform."},"seed":{"type":"integer","minimum":1,"description":"The same seed and the same prompt given to the same version of the model will output the same image every time."},"num_images":{"type":"number","minimum":1,"maximum":4,"default":1,"description":"The number of images to generate."},"enable_safety_checker":{"type":"boolean","default":true,"description":"If set to True, the safety checker will be enabled."},"acceleration":{"type":"string","enum":["none","regular","high"],"default":"regular","description":"The speed of the generation. 
The higher the speed, the faster the generation."}},"required":["model","prompt"],"title":"alibaba/z-image-turbo"}}}},"responses":{"200":{"content":{"application/json":{"schema":{"type":"object","properties":{"data":{"type":"array","nullable":true,"items":{"type":"object","properties":{"url":{"type":"string","nullable":true,"description":"The URL where the file can be downloaded from."},"b64_json":{"type":"string","nullable":true,"description":"The base64-encoded JSON of the generated image."}}},"description":"The list of generated images."},"meta":{"type":"object","nullable":true,"properties":{"usage":{"type":"object","nullable":true,"properties":{"credits_used":{"type":"number","description":"The number of tokens consumed during generation."}},"required":["credits_used"]}},"description":"Additional details about the generation."}}}}}}}}}}} ``` ## Quick Example Let's generate an image of the specified size using a simple prompt. {% tabs %} {% tab title="Python" %} {% code overflow="wrap" %} ```python import requests import json # for getting a structured output with indentation def main(): response = requests.post( "https://api.aimlapi.com/v1/images/generations", headers={ # Insert your AIML API Key instead of : "Authorization": "Bearer ", "Content-Type": "application/json", }, json={ "model": "alibaba/z-image-turbo", "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.", "image_size": { "width": 1440, "height": 512 }, } ) data = response.json() print(json.dumps(data, indent=2, ensure_ascii=False)) if __name__ == "__main__": main() ``` {% endcode %} {% endtab %} {% tab title="JS" %} {% code overflow="wrap" %} ```javascript async function main() { const response = await fetch('https://api.aimlapi.com/v1/images/generations', { method: 'POST', headers: { // Insert your AIML API Key instead of : 'Authorization': 'Bearer ', 'Content-Type': 'application/json', }, body: JSON.stringify({ model: 'alibaba/z-image-turbo', prompt: 'A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.', image_size: { width: 1440, height: 512 }, }), }); const data = await response.json(); console.log('Generation:', data); } main(); ``` {% endcode %} {% endtab %} {% endtabs %}
Response {% code overflow="wrap" %} ```json5 { "data": [ { "url": "https://cdn.aimlapi.com/flamingo/files/b/0a84de46/GTZfn0tOOQzXlC2Wvpzz9.png" } ], "meta": { "usage": { "tokens_used": 10500 } } } ``` {% endcode %}
We obtained the following 1440x512 image by running this code example:
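The schema above also exposes a few options that the quick example leaves at their defaults, such as `num_images` (from 1 to 4), `output_format`, and `seed`. The sketch below requests several candidates in one call and prints every returned URL; parameter behavior follows the schema descriptions, but actual results will of course vary.

{% code overflow="wrap" %}
```python
import requests

# A sketch: request several candidate images in one call using optional
# parameters documented in the schema above (num_images, output_format).
# Insert your AIML API Key instead of <YOUR_AIMLAPI_KEY>:
api_key = "<YOUR_AIMLAPI_KEY>"

response = requests.post(
    "https://api.aimlapi.com/v1/images/generations",
    headers={
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    },
    json={
        "model": "alibaba/z-image-turbo",
        "prompt": "A T-Rex relaxing on a beach, lying on a sun lounger and wearing sunglasses.",
        "image_size": "landscape_16_9",  # one of the named presets from the schema
        "num_images": 3,                 # up to 4 per the schema
        "output_format": "webp",
    },
)
for item in response.json()["data"]:
    print(item["url"])
```
{% endcode %}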
--- # Source: https://docs.aimlapi.com/api-references/text-models-llm/zhipu.md # Zhipu - [glm-4.5-air](/api-references/text-models-llm/zhipu/glm-4.5-air.md) - [glm-4.5](/api-references/text-models-llm/zhipu/glm-4.5.md) - [glm-4.6](/api-references/text-models-llm/zhipu/glm-4.6.md) - [glm-4.7](/api-references/text-models-llm/zhipu/glm-4.7.md)